At the start of the summer Angel Business Communications, working closely with the Data Centre Alliance, organised the latest in a series of Data Centre Transformation events, held at the University of Manchester (a full report can be found later on in the digital magazine). What I found most interesting, and even inspirational, was that the topics up for discussion varied somewhat from your ‘average’ data centre event. Yes, there’s no escaping the fact that good old data centre facilities need to be tried and tested, and very reliable, so any new ideas and technologies need to have their tyres kicked a good few times before being accepted. However, as with the world of IT, it does seem that, when it comes to the data centre, more and more end users are becoming aware that, whatever their thoughts on the future, they cannot afford to ignore the ideas and technologies that plenty of their peers are adopting.
Yes, if you have a greenfield site, or just a greenfield company, then you have no legacy baggage, and making decisions around data centres and IT is relatively easy. For the many companies who have plenty of existing infrastructure, the path forwards might be less clear, but there’s no doubt that, if you don’t start looking at the new ideas and technologies very soon, you could well be bypassed by your competition – whether that’s the start-ups, or just your regular competitors who are moving just that little bit quicker than you.
Ultimately, it’s all about customer experience and reliability. I remember reading, years ago, the words of the MD of one of the embryonic budget airlines, who basically said that if customers were paying so little for their flights, did it really matter if their luggage didn’t make the journey with them?(!) Now, that same budget airline, which unsurprisingly had a poor reputation, has set about addressing it, thanks in large part to technology plus a different corporate mindset, and one of its main rivals is now perceived to have the customer service problem.
So, park the data centre and IT for a minute, put yourself in your customers’ shoes, and work out what they can reasonably expect, in terms of price, performance and after-sales service. Then work out how you need to develop your data centre and IT real estate to deliver on this ideal. I’m fairly sure that you won’t be able to keep your customers as happy as you want them to be without a major re-evaluation of the technology that you are using right now, with a view to how this needs to change.
And if edge, IoT and Artificial Intelligence leave you cold, perhaps it’s time to retire?!
Robert Half’s new report, Digital transformation and the future of hiring, has found that digital processes will be extended to manual, data-entry tasks such as financial modelling (41%), generating financial reports (40%), and project management and reporting (38%) within the next three years. As a result, payroll (37%), financial planning (33%), accounts payable (38%) and accounts receivable (32%) are expected to be the roles impacted by automation by 2022.
Digitalisation has already emerged as a business priority and is set to shape the future of business by offering new technologies with which to address threats and pursue opportunities for competitive advantage. Overall, 87% of executives recognise the positive impact that the growing reliance on technology holds for organisations.
“Digitalisation will offer a new approach where labour and time-intensive processes can be shifted to allow for more value-added work to take place,” explained Matt Weston, Director at Robert Half UK. “Automation is impacting traditional business functions in a big way. Finance is no exception and professionals will need to be prepared to hold a more prominent and integrated influence on the wider business, gaining new skills that will see them through the technological shift.”
The main benefits that businesses are expecting, or already achieving, from digital transformation include improved efficiency and productivity, better decision-making, and employees taking on more value-added work, leading to more fulfilling careers in the long term. Overall, finance executives believe digitalisation will increase the productivity of each individual (59%), enable employees to focus less on data entry and more on the execution of tasks (53%), and provide opportunities to learn new capabilities (51%).
“While a technical understanding will remain the core competence that provides professional credibility, it will need to be enhanced with soft skills,” added Peter Simons, head of future of finance, CIMA (Chartered Institute of Management Accountants). “We are already seeing this move occur within the finance department with the shift from technical to commercial skills. In the future, financial insights won’t just come from financial analysis but collaborating with other areas of the business. Traditionally labelled ‘professional services’ executives will need to engage with people, ask questions, have empathy and communicate in a compelling way to make informed business decisions.”
Research commissioned by M-Files Corporation, the intelligent information management company, has revealed that poor information management practices are preventing UK businesses from realising the true potential of mobile and remote working.
The research, which was conducted by Vanson Bourne and polled 250 UK-based IT decision-makers, found that nearly nine out of every ten (89 per cent) respondents said their staff find it challenging at least some of the time to locate documents when working outside of the office or from mobile devices. Other key findings included:
Half (51 per cent) of respondents would like to be able to access company documents and files, and/or have the ability to edit them, when working remotely/on a mobile device; 44 per cent want the ability to approve documents with digital signatures; 40 per cent are not able to share or collaborate on documents remotely.
Commenting on the findings, Julian Cook, VP of UK Business at M-Files, said: “Remote and mobile working practices are proliferating throughout organisations, and are increasingly viewed as a must-have not only by Millennials entering the workforce, but also by more established members of staff. Effectively empowering your mobile workforce can make a big difference to productivity and efficiency, but only if the tools provided have the robust information management functionality required by organisations with the ease and simplicity employees demand. It’s clear from our research that this is something that most organisations aren’t currently able to offer their employees. Many remote workers still struggle to find the information they need and because of this must find workarounds, such as unauthorised file-sharing apps just to keep the wheels moving.
“The use of unsanctioned personal file-sharing apps at work by employees increases the risk of data breaches, reduces the IT department’s visibility and can raise compliance issues. Organisations intent on pursuing mobile and remote working initiatives therefore need to focus on providing the right tools to make these activities straightforward for everyone if they are to be successful,” Cook continued.
UK business leaders identify far fewer risks affecting their businesses than their counterparts in Germany and France, according to research from the Gowling WLG Digital Risk Calculator, which launches today. This new free tool allows small and medium-sized businesses to better understand their digital risks and compare these to other businesses and industries.
Research informing the Gowling WLG Digital Risk Calculator was gathered from 999 large SMEs in the UK, France and Germany. The findings revealed an overly optimistic picture among UK business leaders, with UK respondents identifying far fewer digital risks as a threat to their business when compared to the views of their European counterparts. For each risk area analysed, the proportion of UK respondents identifying it was between 2% and 25% lower than among non-UK respondents.
Commenting on the research, Helen Davenport, director at Gowling WLG, said: “The recent wide-ranging external cyber-attacks such as the WannaCry and Petya hacks reinforce the real and immediate threat of cyber-crime to all organisations and businesses.
“However, there tends to be an ‘it won’t happen to me’ attitude among business leaders, who on one hand anticipate that external cyber-attacks will increase over the next three years, but on the other fail to identify such areas of risk as a concern for them. This is likely preventing them from preparing suitably for the digital threats that they may face.”
Respondents revealed that external cyber risks (69%) are thought to be the most concerning category of digital threat for businesses across all countries surveyed. This risk is anticipated to grow even further, with 51% of respondents believing that it will increase within the next three years.
Other digital risks of concern to participants include customer security (57%), identity theft / cloning (47%) and rogue employees (42%). More than a third of respondents (40%) also believe that the lack of sufficient technical and business knowledge amongst employees is a risk to their business.
Additionally, one third (32%) of UK businesses feel that digital risks related to regulatory issues have increased during the past three years. However, less than a third (29%) believe that regulatory issues are a risk to their business.
Risks related to highly sensitive/valuable data are the second most prominent risk to businesses (55%), according to respondents. However, when asked about the GDPR, which represents the most significant change to data protection legislation in the last 20 years, only one seventh (14%) of UK businesses were aware of the fines they may face for failing to protect their data. In comparison, 26% of respondents from Germany and 45% from France were aware of the maximum fine, placing UK business leaders at the back of the pack when it comes to understanding the risks posed by failure to comply with the GDPR.
Despite the identification of data risks, only 52% of UK businesses do regular data back-ups, compared to 66% in Germany and 67% in France. Moreover, only 32% of UK businesses and 39% of businesses in Germany are open to using off-site storage for sensitive data today, compared to 50% of French businesses.
Given the changing nature of the digital world, the majority of business leaders (70%) involve IT support in their digital risk management. In comparison, the number that say they involve legal support drops significantly, to an average of just 31% across the surveyed nations (46% in the UK, 23% in Germany and 23% in France).
When asked about how prepared they feel for their digital risks, only 16% of all respondents stated that they are fully prepared.
Patrick Arben, partner at Gowling WLG, comments: “When affected by a cyber-attack or any other digital threat, the immediate focus is to work with IT professionals to understand what has happened. However, it is always worth taking internal or external legal advice, before commencing an investigation and as circumstances change.
“The essence for all business leaders is to stop ignoring the digital risks their companies face. By doing this, they can easily and proactively work to prevent future attacks from happening.”
According to recent research conducted by the Cloud Industry Forum (CIF), UK businesses looking to embrace digital transformation consider the cloud to be a crucial cog in this process. However, the evidence also demonstrates that there is room for the cloud to evolve further, in order to mitigate lingering cloud migration challenges and a desire to keep some resources on-premises. According to HyperGrid, this is where Enterprise Cloud-as-a-Service (ECaaS) can make a difference.
The CIF research, which polled 250 IT and business decision-makers across a range of UK organisations earlier this year, found that 92 per cent of companies polled consider the cloud to be quite important, very important or critical to their digital transformation strategy, which highlights the appetite for remotely based cloud. However, the survey also revealed that 63 per cent of IT decision-makers that use cloud-based services embrace a hybrid approach, with 43 per cent saying they intend to keep at least some business-critical apps or services on-premises.
Doug Rich, VP of EMEA at HyperGrid, believes that these trends point to the need for an approach which blends the benefits of cloud-based hosting with on-premises infrastructure in a way that goes beyond historic offerings.
Rich said: “CIF’s research has underlined just how crucial cloud is in enabling digital transformation, and there is a tangible desire amongst key decision-makers to embrace it on a more wholesale basis. Its core benefits, including the flexibility of consumption-based pricing and the agility it brings by freeing up IT departments to focus on innovation, are already well-documented. Despite this trend, there remains a steadfast need for companies to keep some of their applications and data a little closer to home. Hybrid IT environments have gone some way towards addressing these concerns, but shortcomings in the way we approach cloud and on-premises infrastructure remain. What businesses need is a service that effectively combines the best of both worlds in a way that has not been achieved before.”
To illustrate this point, the CIF research also found that 52 per cent of IT decision-makers found complexity of migration a difficulty when moving to a cloud solution. Alongside this, 48 per cent said privacy or security concerns are a barrier to digital transformation, with 50 per cent citing investments in legacy systems as a hurdle. These figures demonstrate how organisations are frequently obliged to keep at least some of their infrastructure on-premises, and how the available cloud solutions make it difficult for them to strike this balance.
Rich added: “It’s a near-impossible task to persuade businesses to migrate all of their data and applications to a remotely cloud-based solution. With this in mind, ECaaS is set to play a key role in defining how companies embrace cloud in the future. This solution goes a step further than current hybrid cloud arrangements, by enabling applications and resources stored in both public and private clouds to be easily managed from one central location. Crucially, ECaaS allows for the installation of public cloud on-premises, enabling businesses to benefit from a consumption-based usage model and third-party management of the infrastructure, while also gaining the security and peace of mind offered by keeping infrastructure on-premises.”
Doug Rich concluded: “Embracing digital transformation is vital if organisations want to maintain competitive advantage. Rather than spending valuable IT time and resources on figuring out how to reconcile cloud adoption with the retention of legacy infrastructure, decision-makers should look at how the cloud is evolving to bridge this gap whilst driving innovation.”
According to new research from thermal risk experts EkkoSense, almost eight out of ten UK data centres are currently non-compliant with recent ASHRAE Thermal Guidelines for Data Processing Environments.
EkkoSense recently analysed some 128 UK data centre halls and over 16,500 IT equipment racks – the industry’s largest and most accurate survey into data centre cooling – revealing that 78% of UK data centres are not currently compliant with current ASHRAE thermal guidelines.
The ASHRAE standard – published in the organisation’s ‘Thermal Guidelines for Data Processing Environments – 4th Edition’ – is highly regarded as a best practice thermal guide for data centre operators, offering clear recommendations for effective data centre temperature testing. ASHRAE suggests that simply positioning temperature sensors on data centre columns and walls is no longer enough, and that data centre operators should – as a minimum – be collecting temperature data from at least one point every 3m to 9m of rack aisle. ASHRAE also suggests that unless components have their own dedicated thermal sensors, there’s realistically no way to stay within target thermal limits.
“ASHRAE’s recommendations speak directly to the risks that data centre operators face from non-compliance, and almost all operators use this as their stated standard. Our own research reveals that 11% of IT racks in the 128 data centre halls we surveyed were actually outside ASHRAE’s recommended rack inlet temperature range of 18-27°C – even though this range was the agreed performance window that clients were working towards. We also found that 78% of data centres had at least one server rack that lay outside that range – effectively taking their data centre outside of thermal compliance,” explained James Kirkwood, EkkoSense’s Head of Critical Services.
“Unfortunately the problem for the majority of data centre operators that only monitor general data centre room/aisle temperatures is that average measurements don’t identify hot and cold spots. Without a more precise thermal monitoring strategy and the technologies to support it, organisations will always remain at risk – and ASHRAE non-compliant – from individual racks that lie outside the recommended range. That’s why the introduction of the latest generation of Internet of Things-enabled temperature sensors – introduced since the initial publication of ASHRAE’s report – is likely to prove instrumental in helping organisations to cost-effectively resolve their non-compliance issues,” continued James.
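As a simple illustration of the per-rack checking that this kind of monitoring enables, the sketch below flags racks whose inlet temperature falls outside the 18-27°C ASHRAE recommended range quoted above; the sensor readings and rack names are invented for the example.

```python
# Illustrative per-rack check against the 18-27 degC recommended rack inlet
# range quoted in this article. The readings below are made-up examples.
ASHRAE_MIN_C, ASHRAE_MAX_C = 18.0, 27.0

rack_inlet_temps_c = {
    "rack-A01": 21.4,
    "rack-A02": 26.1,
    "rack-B07": 28.3,   # hot spot: above the recommended range
    "rack-C12": 17.2,   # overcooled: below the recommended range
}

out_of_range = {
    rack: temp
    for rack, temp in rack_inlet_temps_c.items()
    if not (ASHRAE_MIN_C <= temp <= ASHRAE_MAX_C)
}

# On this reading, a hall is only "compliant" if every monitored rack is in range.
if out_of_range:
    print("Non-compliant racks:", out_of_range)
else:
    print("All monitored racks within the recommended inlet range.")
```

The point of the sketch is simply that an average room or aisle temperature would report nothing unusual here, while per-rack data immediately exposes the two outliers.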
This latest EkkoSense research follows on from recent findings that suggested the current average cooling utilisation level for UK data centres is just 34%. According to James Kirkwood: “Our research shows that less than 5% of data centres are actively monitoring and reporting individual rack temperatures and their compliance. The result is that they have no way of knowing if they are actually truly compliant – and that’s a major concern when it comes to data centre risk management.”
Given that UK data centre operators continue to invest significantly in expensive cooling equipment, EkkoSense suggests that the cause of ASHRAE non-compliance is not one of limited cooling capacity but rather the poor management of airflow and cooling strategies. EkkoSense directly addresses this issue by combining innovative software and sensors to help data centres gain a true real-time perspective through the modelling, visualisation and monitoring of thermal performance. Using the latest 3D visualisation techniques and real-time inputs from Internet of Things (IoT) sensors, EkkoSense is able – for the first time – to provide data centre operators with an intuitive, real-time 3D view of their data centre environment’s physical and thermal dynamics.
Research reveals the secrets of essential DC Infrastructure.
Tony Lock, Director of Engagement and Distinguished Analyst, Freeform Dynamics Ltd, August 2017
There are very few organisations now where the demands and expectations placed on IT services don’t increase daily. This creates great pressure on the datacentre, and not just on the server and storage estates: a recent report (http://www.freeformdynamics.com/fullarticle.asp?aid=1955) by Freeform Dynamics clearly shows similar demands impacting the core power, cooling and support infrastructure. So how well are datacentre managers dealing with these pressures?
Unsurprisingly our report indicates very clearly that a significant majority of respondents acknowledge that the effective management of datacentre facilities is important, if not vital, to ensure business continuity. (Figure 1.)
The research indicates that business operations now depend on IT to such a degree that many organisations recognise that they absolutely must improve the availability of such systems. Virtualisation, “cloud solutions” and significant improvements to workload management have had a positive impact, but they all fundamentally rely on the underlying power and facilities systems: unless these are ready to support continuous operations without service interruption, all that hard work at the IT level could be rendered redundant.
Which makes it all the more surprising that many organisations still experience IT service interruptions because of problems in the facilities infrastructure that supports datacentre operations. Indeed, around a third of respondents said that they have experienced “disruptive” levels of downtime because of facilities-related events within the past three months. (Figure 2.)
The fact that outages still take place regularly, despite investments in IT continuing apace, is a problem for businesses of all types and sizes, but more particularly for the datacentre managers who must keep things running smoothly. And this challenge is becoming ever more critical and visible, as business demands and expectations continue to ramp up the pressure on many components in the power and cooling infrastructure (Figure 3.).
The vast majority of respondents indicated that they see considerable, and growing, challenges in everything from the supply of energy to the datacentre, through handling expanding cooling requirements, to managing power distribution within the DC. These challenges often go hand in hand with the pressure to manage and reduce datacentre-related costs and charges.
Elements that in previous years may have fallen into the “nice if you could do something about it” category, such as reducing the carbon footprint and improving datacentre PUE, are now a significant challenge for around a third of DC managers, and a growing concern for many others. It is apparent that after many years of green drivers being largely theoretical, accounting is now making them real.
As the pressure on core datacentre facilities increases, the survey results indicate that a clear majority believe that, at the very least, power and cooling facilities need to be strengthened. Even greater numbers accept that they should improve the monitoring and management tools used in the running of these systems. (Figure 4.)
While many organisations acknowledge that they need to strengthen their datacentre infrastructure, the research also indicates that considerable numbers also need to better understand the options now available to them, especially in terms of power management. However, it can easily be argued that even those organisations that feel they are in a stronger position here will also need to invest, as the evolution of power and infrastructure management technologies continues.
This is summed up very clearly when we look at the state of existing facilities resilience and recovery systems, where the report shows that around one in seven are generally falling short or have no DR capabilities whatsoever. (Figure 5)
These numbers are reflected in the survey when looking at levels of confidence in a range of existing capabilities. Over 60 percent of those surveyed reported they have at best only ‘partial’ confidence that power management systems are well designed, that they have access to the skills and expertise they need or that they can recover quickly in the event of power-related incidents and events. But perhaps the most worrying response, although by no means unexpected, is that only a third of those surveyed are fully confident they have adequately tested their ability to deal with power related failures.
No one doubts that most businesses depend on their IT systems, and that these in turn rely on the underlying datacentre power, cooling and other support facilities. Yet the evidence is that this critical foundation does not receive the attention and investment needed to ensure it can support an expanding range of business services. As in all areas of IT, and in business more generally, getting hold of people with the skills and experience needed to ensure things operate smoothly is a challenge for many. Perhaps only one task is more daunting: getting the time to test recovery processes and make sure they work. It’s a common challenge, but one that has the potential to deliver huge value. Getting business buy-in to adequate testing of recovery capabilities is essential. To get hold of the report, please visit (http://www.freeformdynamics.com/fullarticle.asp?aid=1955).
Angel Business Communications is pleased to announce the categories for the SVC Awards 2017 – celebrating excellence in Storage, Cloud and Digitalisation. The 30 categories offer a wide range of options for organisations involved in the IT industry to participate. Nomination is free of charge and must be made online at www.svcawards.com.
By Steve Hone, DCA CEO and Cofounder of the Data Centre Trade Association
The DCA summer edition of the DCA journal focuses on Research and Development and I’m pleased to say we have received some great articles this month. Research leads to increased knowledge and the ability to develop and innovate, and the benefits of this investment were plain to see in July in Manchester at the DCA annual conference.
The DCA’s update seminar on the 10th was not only an opportunity to bring DCA members up to speed with the work undertaken to date but also to share the plans for the future in its continued support of members and the data centre sector.
The seminar also provided an opportunity for members to gain updates from some of the DCA’s strategic partners, including Simon Allen, who spoke about the new DCiRN (Data Centre Incident Reporting Network), and Emma Fryer, who provided an update on the valuable work Tech UK does in supporting the data centre sector. This was followed by networking drinks in the evening.
On 11th July the DCA hosted its 7th Annual Conference, which took place at Manchester University. The Data Centre Transformation Conference 2017, organised in association with DCS and Angel Business Communications, was a huge success and the event continues to go from strength to strength.
The quality of content and healthy debate which took place in all sessions was testament to just how well run the workshops were. So I would also like to say a big thank you to all the chairs, workshop sponsors and the committee who worked so hard to ensure the sessions were interactive, lively and educational.
The workshop topics covered subject matter from across the entire DC sector; however, research and development continued to feature strongly in many of the sessions, which is not surprising given the speed of change we are having to contend with as the demand for digital services continues to grow.
Having seen the feedback sheets from all the attending delegates, it was clear that a huge amount was gained from the day, not just in respect of contacts and knowledge but also the insight gained from speaking to and listening to others who share the same issues and business challenges. One delegate said it was “refreshing to come to an event where he felt comfortable enough to speak out and learn on his own terms, without feeling he was being sold to”. High praise indeed, so thank you to all the delegates who attended and helped make the day such a success.
We closed the conference with a sit-down dinner in the evening, with good food and wine served by the university students and, of course, great company – which for some meant we were still out to watch the sun come up!
Although some will be taking the opportunity to slip away to recharge their batteries, you still have time to submit articles for the DCM buyers’ guide; the theme is “Resilience and Availability”.
The copy deadline for this is 20th August. There is also still space for copy in the next edition of the DCA journal, with a theme of Smart Cities, IoT and Cloud – always a popular subject – and the copy deadline for that is 12th September. Please forward all articles to Amanda McFarlane (amandam@datacentrealliance.org) and please call if you have any questions.
By Dr Jon Summers, University of Leeds, July 2017
For the last six years at the University of Leeds a group of researchers in the Schools of Mechanical Engineering and Computing have been trying to deal with the extremely complex question of how to manage simple thermodynamics and fluid flows derived from very complex digital workloads.
It is a question that now needs to be addressed, and this can only be achieved using real, live Datacom equipment – servers, switches and storage. Data centres should really be considered as integrated, holistic systems, whose prime function is to facilitate uninterrupted digital services. Would you go out and purchase a car, with all of its technology of aerodynamics, road handling, crashworthiness and engine management, without an ENGINE, which you as the driver would need to specify and source for size, shape, capacity and performance? No, you would not: cars are integrated systems. The same is true of data centres – the engine of the car is equivalent to the Datacom, which is key to the provision of the intended function of the system. That is why research and development around providing the facility for the Datacom should actually use real, live Datacom.
The group at the University of Leeds has in the past received funding from data centre operators, namely Digiplex and aql, to develop experimental setups that can lead to a better understanding of Datacom operation and performance under differing thermal and fluid flow scenarios. With the involvement of Digiplex we constructed a large data centre cube with a hot and cold aisle arrangement and have run this with live Datacom and some Hillstone loadbanks; the latter have been shown to act as a 4U proxy, replicating (with one fan and one heater) four 1U pizza-box servers operating at full capacity. The data centre cube is shown in Figure 1. To augment this activity of thermal management of real Datacom in a live data centre environment, a generic server wind tunnel was built, supported by the Leeds-based data centre and hosting company aql. The wind tunnel offers finer control over the thermal and airflow aspects of managing Datacom and is shown in Figure 2.
Figure 1: Left shows a front view of the cube with a standard wind tunnel connected to the left of the cube. Right highlights the connection of the wind tunnel exhaust to the inlet to the cold aisle of the data centre cube.
The generic server wind tunnel offers the capability to test Datacom equipment at different inlet temperatures and humidity, although the latter is not easily controlled. The equipment has enabled the team to look at Datacom performance in terms of power requirements when the facility fan pressurises the cold aisle. Both the cube and the wind tunnel have helped to look at the effects of pressure and airflow on the Datacom delta temperature between front and back.
Figure 2: Left shows the exit of the generic server wind tunnel. Right shows the full extent of the wind tunnel with the working cross section.
The work at Leeds has come to the attention of the nearly two-year-old data centre research and development group that operates as part of the Government Research Institutes in Sweden, namely SICS North, under the leadership of Tor Bjorn Minde. We are now forging a strong collaboration between SICS North and the University of Leeds, with an exchange of expertise: I am taking up a two-year study leave at SICS North, where we will continue to grow integrated and holistic data centre research and development using live Datacom.
Figure 3 shows the two new data centre pods with real Datacom available for a number of research projects around data centre control, operation and performance. The figure shows SICS ICE module 1 to the left, which houses the world’s first open Hadoop-based research data centre, offering the capability to do open big data research. Module 2, to the right, is a flexible lab with 10 racks, much like the cube but with additional functionality.
Figure 3: The two data centre pods at SICS North, Sweden, with SICS ICE on the left housing the open Hadoop-based research data centre.
By combining the expertise at Leeds on thermal and energy management of a myriad of Datacom systems with the data centre operational capabilities at SICS North, we anticipate being able to offer a stronger understanding of the integration of the Datacom with the data centre facilities at a time of great need.
Acknowledgements
I would like to acknowledge the contributions of PhD student Morgan Tatchell-Evans and my colleague Professor Nik Kapur to the design and construction of the data centre cube, with kind support from Digiplex, and of PhD student Daniel Burdett to the design and construction of the generic server wind tunnel, with generous support from AQL.
Dr. Jon Summers is a senior lecturer in the School of Mechanical Engineering at Leeds. During the last 20 years he has worked on a number of government- and industry-funded projects which have required different levels of computational modelling. Having built and managed compute clusters to support many research projects since 1998, Jon now chairs the High-Performance Computing User Group at Leeds University and is no stranger to high-performance computing, having developed software that uses parallel computation. Applications of his modelling skills have led to publications in areas ranging from process engineering and tribology through to bioengineering and topics as diverse as dinosaur extinction. In the last three to four years Jon’s research has focussed on a range of airflow, thermal management and energy efficiency projects within the data centre, HVAC and industrial sectors.
By Mark Fenton, Product Manager at Future Facilities
When one of our developers, Bo Xia, put on the Oculus Rift headset for the first time at Future Facilities HQ, we were skeptical about what his reaction would be.
To the rest of the team watching from the real world, it was a curious scene: Bo was standing in our office strapped into a VR headset, moving his head and arms around wildly. But Bo had been transported and was now completely immersed in one of our data centre models—walking down the aisles, looking at live power consumption and watching simulated airflows. He was experiencing for the first time a fully-immersive data centre simulation. He took off the headset and delivered his verdict with a huge grin: “Amazing!”.
Everyone we have delivered the Rift experience to has had this reaction. Often, this has been their first experience of VR and so has come with a healthy level of skepticism and even trepidation towards the technology. What is amazing is watching how quickly that melts away once they are transported to a rooftop chiller plant or back in time to an IBM mainframe facility. Once immersed, there is full freedom to explore the data centre as you please. Walk an aisle, fly through the duct system, watch airflows or engage with any asset of interest. You quickly forget the limitations of being human and fly up to get a bird’s eye view before diving into the internals of a cabinet.
This fully-flexible experience may be the foundation of almost unlimited opportunities for our data centre ecosystem: designers walking clients around their concepts, colocation providers selling a proposed cage layout, upper management touring their investment, facility engineers troubleshooting their own sites and much more. It’s clear that VR will not only change the way we visualise our data centres but more excitingly, it will change the way we work with them as well.
For operational sites, VR will naturally progress to AR (augmented reality), where performance data can be overlaid onto the real world. Imagine walking through your data centre, putting on your AR glasses and superimposing live DCIM data or simulation results directly onto your view. With human error causing the largest percentage of data centre outages, AR could be invaluable in training and assisting site staff to ensure fewer mistakes are made.
When looking at cooling performance, site staff could visualise the airflow around overheating devices to fully understand the thermal environment - and then interactively make improvements. In addition, IT and Facilities could use this technology to proactively visualise their next deployments, a maintenance schedule or even a worst-case failure. VR offers a fully-immersed testbed, where you can experience first-hand the engineering impact of any data centre change you’re planning to make.
So what about when you can’t physically walk around the data hall floor? With the rapid growth of IoT and edge computing, there is a drive towards smaller local facilities that provide low-latency connectivity between users and their cloud requirements. From autonomous cars to the next Pokemon Go, there is an exponentially-increasing volume of data being produced, and an unwavering pursuit towards faster connectivity to make use of it.
This trend towards larger cloud data centres supported by a discretized network of hundreds - or even thousands - of remote edge sites will be a significant management challenge. This lends itself beautifully to the VR world: VR provides remote operators the tools to assess alarms and find faults, then make adjustments to mitigate the risk of downtime - all from the comfort of their chair. This concept was demonstrated by Vapor IO at the recent Open19 launch: they showed a Vapor chamber being used in a remote edge location, streaming live data from OpenDCRE and simulation airflow from our own 6SigmaDCX software.
The future of data centres is embarking on an exciting journey towards higher demand, local edge connectivity and a fully-connected IoT world. Engineering simulation techniques will ensure these sites can deliver the highest number of applications with the lowest energy spend – all with no risk of downtime. Combining the power of simulation with VR will allow data centre professionals to engage and immerse themselves in their remote environments and, for the first time, truly understand the impact of any change they wish to make. VR certainly has the ‘wow’ factor, but it is becoming increasingly clear the technology will also provide a huge benefit to the running and optimising of the next generation of data centres.
By Professor Xudong Zhao, Director of Research, School of Engineering and Computer Science, University of Hull
It is universally acknowledged that cooling systems consume 30% to 40% of the energy delivered into Computing & Data Centres (CDCs), while electricity use in CDCs represents 1.3% of the world’s total energy consumption. The traditional vapour compression cooling systems used in CDCs are neither energy efficient nor environmentally friendly.
Several alternative cooling systems, e.g. adsorption, ejector and evaporative types, have a certain level of energy-saving potential but exhibit inherent problems that have restricted their wide application in CDCs.
One of the most promising directions is the application of the dew point cooling system, which has been widely used in other industrial fields and, if designed properly, potentially offers the highest efficiency of any cooling system (an electricity-based COP of 20-22).
To promote its application in CDCs, an international and inter-sectoral research team, led by the University of Hull and supported by the DCA Data Centre Trade Association, has been formed to work on a joint EU Horizon 2020 research and innovation programme dedicated to developing the design theory, computerised tools and technology prototypes for a novel CDC dew point cooling system. Such a system, incorporating critical and highly innovative components (i.e., a dew point air cooler, an adsorbent sorption/regeneration cycle, a microchannel loop-heat-pipe (MCLHP) based CDC heat recovery system, a paraffin/expanded-graphite based heat storage/exchanger, and an internet-based intelligent monitoring and control system), is expected to achieve 60% to 90% electrical energy savings at an initial price comparable to traditional CDC air conditioning systems, thus removing the outstanding problems that remain with existing CDC cooling systems.
Five major parts of the innovative system, as shown in Fig. 1, are being jointly developed by several organisations of the research team, including:
(1) a unique high-performance dew point air cooler;
(2) an energy efficient solar and/or CDC-waste-heat driven adsorbent sorption/desorption cycle, containing a sorption bed for air dehumidification and a desorption bed for adsorbent regeneration, with the two beds alternating in function;
(3) a high efficiency micro-channels-loop-heat-pipe (MCLHP) based CDC heat recovery system;
(4) a high-performance heat storage/exchanger unit; and
(5) internet-based intelligent monitoring and control system.
Fig. 1 Schematic of the CDC dew point cooling system
During operation, a mixture of the return and fresh air will be pre-treated within the sorption bed (part of the sorption/desorption cycle), which will create a lower and stabilised humidity ratio in the air, thus increasing its cooling potential. This air will then be delivered into the dew point air cooler. Within the cooler, part of the air will be cooled to a temperature approaching the dew point of its inlet state and delivered to the CDC spaces for indoor cooling. Meanwhile, the remaining air will receive the heat transported from the product air and absorb the evaporated moisture from the wet channel surfaces, thus becoming hot and saturated before being discharged to the atmosphere.
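To put a number on what “approaching the dew point of its inlet state” means, the short sketch below estimates that lower bound using the standard Magnus approximation; the formula and the inlet condition are not taken from the project, they are simply an illustrative calculation.

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Approximate dew point (degC) using the Magnus formula."""
    a, b = 17.27, 237.7  # standard Magnus coefficients for water vapour
    gamma = (a * temp_c) / (b + temp_c) + math.log(rh_percent / 100.0)
    return (b * gamma) / (a - gamma)

# Hypothetical inlet condition after the sorption bed has dried the air:
# 28 degC at 30% relative humidity (assumed values for illustration).
inlet_t, inlet_rh = 28.0, 30.0
print(f"Inlet dew point: {dew_point_c(inlet_t, inlet_rh):.1f} degC")
# The product air leaving the dew point cooler can approach, but never
# fall below, this temperature - which is why pre-drying the air in the
# sorption bed increases its cooling potential.
```

For the assumed condition the dew point comes out at roughly 9°C, illustrating how a drier inlet air stream gives the cooler a much lower temperature to work towards.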
As the adsorbent regeneration process requires significant amounts of heat while the CDC data processing (or computing) equipment generates heat constantly, a micro-channel loop-heat-pipe (MCLHP) based CDC heat recovery system will be implemented. Within the system, the evaporation section of the MCLHP will be attached to the enclosure of the data processing (or computing) equipment to absorb the heat dissipated from the equipment, while the absorbed heat will be released to a dedicated heat storage/exchanger via the condenser of the MCLHP.
The regeneration air will be directed through the heat storage/exchanger, taking away the heat and transferring the heat to the desorption bed for adsorbent regeneration, while the paraffin/expanded-graphite within the storage/exchanger will act as the heat balance element that stores or releases heat intermittently to match the heat required by the regeneration air. It should be noted that the heat collected from the CDC equipment and (or) from solar radiation will be jointly or independently applied to the adsorbent regeneration, while the system operation will be managed by an internet-based intelligent monitoring and control system.
This super-high performance has been validated by simulation and by prototype experiments carried out in Hull and in other partners’ laboratories. The coefficient of performance (COP) of the proposed dew point cooling system reaches as high as 37.4 in ideal weather conditions, while the average COP of a traditional cooling system is around 3.0. The tested performance of the new system at various climatic conditions is depicted in Fig. 2.
Fig. 2: Performance of the super performance dew point cooler at various climatic conditions.
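To put those COP figures in perspective, the back-of-the-envelope sketch below compares annual cooling electricity at the two quoted COPs for a hypothetical, constant 100 kW heat load; the load, running hours and the assumption of fixed COPs are illustrative only and are not project results.

```python
# Rough comparison of annual cooling electricity at the quoted COPs.
# Assumptions (illustrative only): a constant 100 kW heat load removed
# 24/7, and fixed COPs of 3.0 (traditional) and 37.4 (dew point, ideal).
HEAT_LOAD_KW = 100.0
HOURS_PER_YEAR = 8760

def annual_cooling_kwh(cop):
    """Electrical energy (kWh/year) needed to remove the heat load at a given COP."""
    return HEAT_LOAD_KW / cop * HOURS_PER_YEAR

traditional = annual_cooling_kwh(3.0)
dew_point = annual_cooling_kwh(37.4)
saving = 1 - dew_point / traditional

print(f"Traditional: {traditional / 1000:.0f} MWh/yr")
print(f"Dew point:   {dew_point / 1000:.0f} MWh/yr")
print(f"Saving:      {saving:.0%}")
```

Under these idealised assumptions the saving comes out above 90%; real-world savings are lower because the COP varies with climate, which is consistent with the 60% to 90% range targeted by the project.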
The dynamic simulation was also carried out under UK (London) climate conditions for four scale types of CDC (i.e. small, medium, large, super) and five application scenarios (room space level, row level, rack level, server level). The results show dramatic annual electricity savings compared to the reference cases with traditional cooling plant, especially for the application at server-level cooling. The annual energy consumption comparison for a large-scale CDC is provided as an example in Fig. 3. The results also show that the bigger the CDC’s scale, the more electricity would be saved by applying the super dew point air conditioning system.
Estimated annual electricity savings were also produced for the reference data centres in the UK.
Fig. 3: Annual energy consumption comparison of the traditional and new cooling systems for a large-scale CDC across various application scenarios.
To summarise, the development, test and demonstration of the innovative super performance dew point cooling system for CDCs are going to be completed by 2020. The wide application of such high-performance cooling system will overcome the difficulties remaining with existing cooling systems, thus achieving significantly improved energy efficiency, enabling the low-carbon operation and realizing the green dream in CDCs.
Professor Xudong Zhao, BEng, MSc, DPhil, CEng, MCIBSE, is Director of Research and Chair Professor at the School of Engineering and Computer Science, University of Hull (UK), and enjoys a global reputation as a distinguished academic in the areas of sustainable building services, renewable energy and energy efficiency technologies. Over a 30-year professional career he has led or participated in 54 research projects funded by the EU, EPSRC, the Royal Society, Innovate UK, the China Ministry of Science and Technology and industry, with an accumulated fund value in excess of £14 million, as well as 40 engineering consultancy projects worth £5 million, and has claimed five patents. To date, he has supervised 24 PhD students and 14 postdoctoral research fellows, published 150 peer-reviewed papers in high-impact journals and refereed conferences, been involved in the authoring of three books, and has chaired, organised and given keynote (invited) speeches at 20 international conferences.
By Matthew Philo, Product Manager – Denco Happel
The data centre industry continues to develop and innovate at a pace like no other, but this does not change the core principles for operations managers: they want simplicity and efficiency, but never at the cost of reliability. Matthew Philo, Denco Happel’s CRAC Product Manager, explains why these principles were central during the testing and development of the company’s new free cooling solution.
As energy costs continue to rise, data centre owners and managers are looking for ways to reduce the amount of energy used by both IT and supporting infrastructure. Considering that the energy used for climate control and UPS systems can be around 40% of a data centre's total energy consumption, efficient cooling systems can significantly cut carbon footprints and energy bills. Over the past year, we have been looking at a new way of combining free cooling with the reliability of mechanical cooling technology to help IT managers improve their Data Centre Infrastructure Efficiency (DCiE) and Power Usage Effectiveness (PUE).
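For reference, the two efficiency metrics mentioned above are defined as follows; the worked figure simply assumes, for illustration, that the 40% quoted above represents the whole non-IT overhead, so that IT accounts for the remaining 60% of the total.

```latex
\[
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT}}},
\qquad
\mathrm{DCiE} = \frac{E_{\text{IT}}}{E_{\text{total facility}}} = \frac{1}{\mathrm{PUE}}.
\]
\[
E_{\text{IT}} \approx 0.6\,E_{\text{total}}
\;\Rightarrow\;
\mathrm{PUE} \approx \frac{1}{0.6} \approx 1.67,
\qquad
\mathrm{DCiE} \approx 60\%.
\]
```

Any cooling energy saved therefore feeds directly into a lower PUE and a higher DCiE.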
The existing Multi-DENCO® range was taken as a starting point; it had already introduced inverter compressors so that heat rejection could be matched exactly to the room requirements. Whilst a data centre operates in a 24/7 environment, the cooling requirements vary throughout the day and across different seasons. This means that many units spend most of their lives in part-load conditions, below 100% output.
It therefore made sense to increase efficiency at lower loads to reap the biggest benefits. When we incorporated variable technology, such as EC fans and inverter compressors, into our refrigerant-based, direct expansion (DX) Multi-DENCO® solution, it provided the opportunity to reduce energy consumption because it benefits from the ‘cube root’ principle. A good rule of thumb is that a 20% reduction in speed will give a 50% reduction in energy consumption. This means that if you can operate at 80%, rather than 100%, you very quickly see your energy consumption halving.
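That rule of thumb follows from the fan affinity laws, which relate power draw to rotational speed by a cube relationship; a minimal check of the 20%/50% figure quoted above:

```latex
\[
\frac{P_2}{P_1} = \left(\frac{n_2}{n_1}\right)^{3},
\qquad
n_2 = 0.8\,n_1
\;\Rightarrow\;
\frac{P_2}{P_1} = 0.8^{3} \approx 0.51.
\]
```

In other words, running at 80% speed draws roughly half of the full-speed power – a saving of about 50%, as stated.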
However, this did not mean our progress on efficiency had finished. We realised that we could deliver further energy savings by exploiting the variability of the outdoor environment - in particular when it gets colder.
We knew that we needed to keep the full DX circuit within our design, to give the reliability that was required by our customers. But a refrigeration circuit does not benefit greatly from colder weather, so we focused on using indirect free cooling to provide suitably cold water to the indoor unit.
Outside of peak summer temperatures, this water circuit would reduce or remove the need for mechanical cooling (i.e. a direct expansion circuit), and the Multi-DENCO® F-Version was born.
In typical indoor conditions, 100% of the cooling requirements could be provided by the free-cooling circuit up to an outdoor temperature of 10°C. If the unit is operating in part-load conditions, it can continue to fully meet a datacentre’s cooling needs beyond this temperature, which means that the DX circuit’s compressor can be switched off for longer to save energy. To maintain the unit’s reliability, the free-cooling water circuit was kept separate to ensure that the DX circuit could operate independently if it was needed to fully meet the cooling load. A new EC water pump was chosen to give variable control and deliver the same energy-saving benefits offered by other models in the Multi-DENCO® range.
Whilst the benefits of 100% free cooling are easy to understand, the significant advantages of mixed-mode operation can be easily overlooked. Mixed mode, where both the free cooling and the direct expansion circuits operate simultaneously, can run up to an outdoor temperature 5 degrees below the indoor environment’s set point (for example, a set point of 30°C would allow mixed mode up to 25°C outdoors), which in Europe can be a large percentage of the year.
During those many hours of mixed-mode operation, the ‘cube root’ principle is being exploited. The free cooling circuit may only be contributing a small percentage of the cooling, but it is also reducing the operating duty of the direct expansion circuit. As mentioned earlier, if the free cooling circuit can provide 20% of what is required, then there is 20% less for the direct expansion circuit to do. This means the direct expansion circuit will save 50% in energy consumption when the unit is in mixed mode.
Energy consumption continues to be a key factor in the operational cost of a data centre. As datacentres come under mounting pressure to increase performance, managers want to take advantage of any efficiency options available. By combining the reliability of a direct expansion circuit with a simple indirect free cooling circuit, energy efficiency can be improved without risking interruptions to a datacentre’s critical operations.
For more information on DencoHappel’s Multi-DENCO® range, please visit http://www.dencohappel.com/en-GB/products/air-treatment-systems/close-control/multi-denco
A recent document published by the European Patent Office (EPO) includes a graph which claims to be “measuring inventiveness” of the world’s leading economies using the ratio of European patent filings to population[1].
The data, reproduced in the graph (top right), shows the number of European patent filings per million inhabitants in 2015. Switzerland comes out on top, with 873 applications per million inhabitants, whilst the UK sits 16th on the list with only 79 applications per million inhabitants. This means that Switzerland has over ten times as many European patent filings as the UK, per million inhabitants.
Additional data, provided by the World Intellectual Property Organisation (WIPO)[2], shows resident patent filings per £100bn GDP for the last 10 years - see the graph (bottom right). The UK is at the bottom of the pile, flat-lining at only about one filing per £100m GDP. In 2015, the USA beat the UK by a factor of about two and Korea beat the UK by a factor of over ten.
These graphs show slightly different things. One shows European patent filings, the other resident patent filings (i.e. filings in a resident’s “home” patent office). However, they both make the same point loud and clear: UK companies file significantly fewer patent applications, in relative terms, than their competitors in other countries.
What is less clear is why the numbers are so low. Broadly speaking, there are two possible explanations.
One is that the UK really is less inventive than the rest of the world - as the EPO graph would have you believe. We would like to think that’s not true - the UK is renowned in the world of innovation, with UK inventors famously having invented the telephone, the world wide web, and recently even the holographic television, to name but a few.
A more plausible explanation is that the UK has a different patent filing “culture”, which originates from a number of factors:
ARM’s £24bn takeover, for example, was the biggest ever tech deal in the UK, and the majority of that value can be attributed to ARM’s patent portfolio.
So the reasons are many and varied, but the message to UK companies is clear: your international competitors are likely to be filing more patents than you, and you need a strategy that takes this into account. This might involve filing more patent applications, or simply becoming more aware of your competitors’ patent portfolios.
Withers & Rogers is one of the leading intellectual property law firms in the UK and Europe. They offer a free introductory meeting or telephone conversation to companies that need counsel on matters relating to patents, trademarks, designs and strategic IP. For more information call 020 7940 3600 or visit www.withersrogers.com
[1] EPO Facts and figures 2016, page 15
These are interesting times for Six Degrees Group. Launched in 2011, its strategy was to combine the capabilities offered by a true data centre and converged network operator with the flexibility and service-creation skills of an agile startup.
With a focus on mid-market customers, the company has grown into a £100m business. Highly acquisitive over its first five years, buying 19 businesses, 6DG was sold by original owner Penta in 2015 to new backers, Charlesbank Capital, a Boston and New York-based investment business with more than $3bn of assets under management.
Following the change of ownership, Six Degrees has more recently undergone a change in leadership with the appointment of David Howson as CEO in February this year. Upon his appointment, Howson, who previously held senior positions at Zayo and Level 3 Communications, said he was looking forward to “helping Six Degrees establish itself as the market leader for mission-critical managed services and to drive growth both organically and through strategic acquisitions”.
Howson spent his first 60 days looking at the company’s capabilities, strengths and opportunities, and shaping his view on strategy and on what had already been put in place by the company’s founders. He also talked to many customers and clients. “I have spent a lot of my career in front of customers and I used that experience heavily in this process,” he reveals.
One of the first things Howson has done is to change the organisational structure of 6DG to align with three key areas: advanced solutions, converged solutions and partner solutions. These key business units are backed by two platform units, Network Services and Platform Services, and the company has consolidated its product offerings under the Six Degrees Group brand.
The reorganisation comes on top of a major investment in the internal platform using ServiceNow for customer experience and Tableau for live management information. Howson says the ServiceNow investment is already delivering value and will provide the foundation to significantly scale the business.
One of the company’s aims this year is to combine the capabilities from its 19 acquisitions into a converged offering for the mid-market and develop a set of specialisms that are better matched to clients to drive higher growth.
“The key is alignment,” Howson says. “We are integrating the five standalone companies. We are making sure all our services are available to all our clients.” He believes that the company’s strengths in individual capabilities should put it in an even stronger position when it converges them for customers.
Howson is keen to stress that the company will not deviate from its commitment to owning its own infrastructure. “It’s about delivering on customer SLAs and that is the main reason why we own our infrastructure.”
In terms of verticals, Six Degrees is particularly strong in financial services and retail, but the company plans to become more heavily involved in other sectors going forward. When it comes to operating in other countries, Howson stresses that the company intends to continue its focus on the UK market for now but it can deliver international capabilities for UK customers with overseas needs if they want to deal with a single converged provider.
After making 19 acquisitions in five years, Howson says Six Degrees is concentrating on “getting to grips with the acquisitions”. There are no immediate plans for further M&A activity: “We are internally and customer focused at present, but are always seeking inorganic opportunities that create additional value,” he adds.
For now, most of the focus is on bedding in the new operational model and ensuring top line and EBITDA growth. “Going forward, we aim to make sure all parts of the business are growing, although some will grow faster than others,” Howson states. “The plan is to grow revenue and EBITDA.”
The modular UPS sector is on the rise, presenting uncapped opportunity for business. Leo Craig, general manager of Riello UPS, offers his guidance on the action businesses should be taking to maximise modular scale-up.
The upward trajectory of the global modular UPS market shows no signs of abating. According to a report published last year by global research body, Frost & Sullivan, the market is expected to grow twice as fast as the traditional UPS market (forecast period 2015 – 2020), with a general acceleration in growth predicted post-2017.
Data centres continue to dominate market revenue in the modular UPS sector. Thanks to the internet of things, smart devices are fuelling huge demand on data centres and this is only going to increase. For data centres needing to achieve rapid expansion in order to keep pace with demand for increased processing, the modular UPS provides a raft of benefits.
Modular UPS solutions can be scaled up in tandem with the growing demands of a business – removing the risk of unnecessarily oversizing a UPS at the outset – and offer high availability, scalability, reliability and serviceability, alongside high efficiency, low cost of ownership and high power density. And, whilst modular systems can be scaled up to meet increased demand, data centres can easily switch modules off too, guarding against under-utilisation. The modular UPS also addresses the problem of limited floorspace, which is an increasing concern for data centres. Modular UPS systems can be expanded vertically, provided there is room within the existing cabinet for additional UPS modules, or horizontally with the addition of a further UPS cabinet.
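As a back-of-the-envelope illustration of the right-sizing argument, the short Python sketch below works out how many modules a given load needs with N+1 redundancy and compares utilisation against a single unit sized for the eventual design load; the module rating and load figures are entirely hypothetical.

import math

def modules_required(load_kw, module_kw, redundancy=1):
    # N modules to carry the load, plus the redundant spare(s)
    return math.ceil(load_kw / module_kw) + redundancy

load_kw = 120        # hypothetical current IT load
module_kw = 50       # hypothetical module rating
n = modules_required(load_kw, module_kw)          # 3 + 1 = 4 modules installed
installed_kw = n * module_kw
print(f"{n} x {module_kw} kW modules installed ({installed_kw} kW)")
print(f"Utilisation at current load: {load_kw / installed_kw:.0%}")

# Compare with a standalone unit sized for an eventual 400 kW design load
standalone_kw = 400
print(f"Standalone utilisation today: {load_kw / standalone_kw:.0%}")

In this illustration the modular deployment runs at roughly 60% utilisation today and grows module by module, while the monolithic unit sits at around 30% from day one.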
When it comes to maintenance, modular UPS systems are marginally easier to service and repair in situ than a standalone UPS, because a failed UPS module can be ‘hot-swapped’. The failed or suspect module is then returned to a service centre for investigation. Returning a standalone UPS to active service may require a board swap.
Maintenance, of course, is a hot topic currently – in the wake of the UPS-related issues experienced by British Airways earlier this year, which had disastrous consequences for the business. As this example shows, the way in which maintenance is carried out needs to be carefully considered, whether you choose to implement a modular or centralised UPS system.
Human error is the main cause of problems occurring during maintenance procedures; engineers may throw a wrong switch, or carry out a procedure in the wrong order. But, whilst it might be easy to lay blame solely at the feet of the engineer in these instances, errors of this kind are often the result of poor operational procedures, poor labelling or even poor training. By ironing out these areas right at the start of the UPS installation, risks can be avoided.
For example, if the solution being deployed is a critical system comprising large UPSs in parallel and a complex switchgear panel, Castell interlocks should be incorporated into the design. Castell interlocks force the user to switch in a controlled and safe fashion, but are often left out of the design to save costs at the start of the project. This is a common occurrence and the client could pay dearly in the future if a switching error occurs.
Simple things can make a difference. By ensuring that basic labelling and switching schematics are up-to-date, disaster can be averted. Having clearly documented switching procedures available is recommended. If the site is extremely critical, a pilot/co-pilot procedure (two engineers check each step before it is carried out) will prevent most human errors.
Any maintenance is typically intrusive into the UPS or switchgear, so managing this carefully is vital. Most problems that occur, including the failure of electrical components, are preceded by an increase in heat. If a connection point isn’t tightened properly, for example, it will start to heat up and eventually fail in some way. Short of checking every connection physically, the most effective solution is thermal imaging. Thermal imaging cameras are relatively cost-effective and easy to use these days, making them a worthwhile investment. Thermal imaging can identify potential issues that wouldn’t necessarily be picked up using conventional techniques, without the need for physical intervention.
Round-the-clock equipment monitoring also offers robust protection and should be part of the maintenance package, as a UPS will raise an alarm if any parameter of its operation is wrong – if an increase in heat, a fan failure or a problem with the batteries is detected, for example. It is highly unlikely that a UPS failure will be limited to times when the engineer is carrying out the annual maintenance visit, so constant monitoring is critical.
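As a simple illustration of what round-the-clock parameter monitoring boils down to, the sketch below checks a polled set of UPS readings against alarm limits; the parameter names, limits and values are invented for illustration and are not specific to any particular UPS product.

# Hypothetical telemetry snapshot polled from a UPS (values and limits invented)
readings = {"inlet_temp_c": 41.0, "fan_ok": True, "battery_voltage_v": 382.0, "load_pct": 78.0}
limits = {"inlet_temp_c": (0, 40), "battery_voltage_v": (360, 410), "load_pct": (0, 90)}

alarms = []
for name, (lo, hi) in limits.items():
    value = readings[name]
    if not lo <= value <= hi:
        alarms.append(f"{name}={value} outside {lo}-{hi}")
if not readings["fan_ok"]:
    alarms.append("fan failure reported")

for alarm in alarms:
    print("ALARM:", alarm)   # in practice this would raise a ticket or notify the on-call engineer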
Rigorous training is also vital and, to protect themselves, clients must ensure that the attending engineer is certified to carry out the work. It is the responsibility of the client to ask the maintenance company for proof of competency levels – pertaining both to the company itself and to the engineers it uses. Risk averse clients should also check ‘on the day’ that the engineer on site is competent and isn’t, for instance, a last-minute sub-contractor sent in because the original engineer is off sick.
A strong maintenance package should also ensure that when the UPS does fail, the response is timely and effective. Service level agreements need to be appropriate to the criticality of the application. There is no point in having a 24/7-response maintenance contract for a UPS if access to it can only be gained during normal business hours. Conversely, if operations are 24/7 and very critical to the business, then 24/7 response is a must.
Caution should be applied wherever maintenance contracts seem too good to be true – can a two-hour response really be guaranteed, for instance? Anyone who drives on the M25 might question this! It is also worth checking exactly what constitutes the ‘response’ - will it just be a phone call or will it be someone coming to site, and, if so, will that someone be a competent engineer? It’s important to pay attention to the guaranteed fix time too as it doesn’t matter how quickly an engineer arrives on site if the problem then takes a week to fix because of parts being delayed and so on.
Finally, if the UPS can’t be fixed within a certain timescale, you need to understand what redress is available; will the UPS be replaced, and so forth?
Maintenance continues to be a key concern for any business investing in a UPS – be that modular or standalone. Ease of maintenance is, no doubt, one of the differentiators helping to drive growth in the modular UPS market but, whatever UPS product businesses select, it is essential they apply proper due diligence to their maintenance approach. Watertight maintenance processes and procedures should be in place and relevant documentation must be easily and readily available. As well as ensuring that switches cannot be thrown by accident, businesses need to check that engineers are competent and should study the SLAs in maintenance agreements. Adding technologies like thermal imaging into the maintenance mix will further reduce the likelihood of issues. Stringent maintenance processes should be the constant factor in an ever-evolving market.
A recent forecast by Gartner predicted that ‘8.4 billion connected things will be in use in 2017’, an increase of 31 percent from 2016. Gartner also predicted that this figure will reach 20.4 billion by 2020. This gives some indication of how quickly the IoT is growing, and will continue to grow, in the future.
By Mike Kelly, CTO at Blue Medora.
The IoT is becoming more and more naturally integrated into daily life, and with this come big opportunities for organisations. This increase in data means that businesses can gain both insight and competitive advantage. The benefits include analytics, new marketing strategies and operational efficiencies.
However, for businesses to transform this opportunity into revenue, the data needs to be securely analysed, stored and shared. In order to manage all of this data, databases have become much more complex, and consequently IT teams are struggling under the burden of having so much to manage. Many IT professionals have to rely on disparate tools and out-of-date equipment to manage their database infrastructures, resulting in complexity and inefficiency.
When working with the IoT, one of the most vital components is database technology, and because of this it too has been developing rapidly. The transition from traditional SQL databases to NoSQL, open source, big data and cloud databases has come about fairly quickly, not to mention the swift adoption of both cloud infrastructure and virtualisation.
All of this combined makes for rapid evolution in the IT world, and it is this advancement in the database space that allows the data analytics from IoT and big data to be used by organisations and corporations around the world. It also aids the development processes that help IT teams, and the businesses they serve, perform at a high level.
It is this rapid growth of database technology that has caused database monitoring and management technology to fall decades behind. For the most part, today’s database monitoring capabilities are still aimed at the on-premises, bare-metal, traditional SQL world – nowhere near matching what businesses need in order to make the most of analysing their IoT data.
The essential tool for analysing data is a database monitoring system that provides a comprehensive view of the data stack. This means IT teams can easily understand their data and the complex functions going on in their environments. A business that can effectively monitor its database layers to optimise peak performance, and resolve any bottlenecks caused by the huge amount of data coming from IoT devices, will be in a far superior position to other organisations, as it will be able to use its collected data efficiently and expertly.
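As a minimal sketch of the kind of visibility such a monitoring layer provides, the example below polls PostgreSQL’s built-in statistics views for connection counts and buffer cache hit ratio; PostgreSQL, the connection details and the 90% threshold are assumptions chosen purely for illustration and do not describe Blue Medora’s products.

import psycopg2

# Illustrative only: placeholder connection details
conn = psycopg2.connect(host="db.example.internal", dbname="metrics",
                        user="monitor", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT datname,
               numbackends,
               round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 1) AS cache_hit_pct
        FROM pg_stat_database
        WHERE datname NOT LIKE 'template%'
    """)
    for name, backends, hit_pct in cur.fetchall():
        # Flag databases whose cache hit ratio suggests an emerging bottleneck
        flag = " <-- investigate" if hit_pct is not None and hit_pct < 90 else ""
        print(f"{name}: {backends} connections, {hit_pct}% cache hit{flag}")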
The Internet of Things has already begun to change how businesses perceive and use data. Yet IT teams are unable to analyse and understand their database infrastructures due to the vast amounts of data that are constantly being collected. IoT data will unfortunately make IT infrastructures and databases considerably harder to manage, and more complex. To tackle this, IT teams need to make sure they have a database monitoring system and IT management tools in place, in order to enable full visibility, reduce network complexity and spot any underlying problems before they become issues. This way, businesses can make the most of IoT data.
Dr. Alex Mardapittas, managing director of leading energy storage and voltage optimisation brand Powerstar, discusses why a battery-based energy storage solution provides a more modern approach to Uninterruptible Power Supply (UPS) and highlights the range of benefits the technology can provide for businesses looking not only to secure the supply for critical systems and data centres but also to reduce energy costs.
We are living in a digital age, where an increasing number of items are connected to the ‘Internet of Things’ (IoT) and consumers are both consciously and unconsciously providing companies with a significant amount of personal data. With a greater number of public and private sector businesses placing more emphasis on ‘big data’ in regard to how it can produce vital statistics and potentially drive sales, it is perhaps obvious that the number of data centres is on the rise.
With increasing levels of data come not only potential security issues but also power problems. Recent information produced by the Payment Card Industry Security Standards Council (PCI SSC) has warned UK businesses they could face up to £122bn in penalties for data breaches. Alongside this, as the scale of data and the cost of electricity from both the National Grid and energy providers increase, it is predicted that data centres will require three times as much energy as they do now within the coming decade, with all the financial implications this entails. With this in mind, it will be essential to have engineered energy solutions installed that ensure a constant, stable supply whilst reducing electricity consumption and CO2 emissions.
As recently documented in the national news, power supply quality issues such as blackouts, brownouts, voltage spikes and dips can cause significant damage to highly sensitive areas, resulting in major disruption for companies and their customers. Keen to avoid the implications of data centre power failures, many facilities implement full backup power or Uninterruptible Power Supply (UPS) functionality.
Historically, backup power has been provided by the use of combined heat and power (CHP) units or generators, both of which are still present across many facilities within the UK. However, even though CHP and generators provide energy off the grid and offer sufficient backup power, they are ageing systems in a technologically advanced world. They also do not allow power to be provided instantly to the load - a data centre for example – when required.
In contrast, one of the most widely discussed and effective recent UPS innovations is large-scale battery-based energy storage technology, with outputs starting from around 50kW. Battery-based energy storage simply replicates what you would find in most electrical devices: the battery charges from the National Grid, or where possible from renewable sources, and stores that energy so it can be supplied to a load, or facility, almost immediately when required.
Even though it is a modern solution, battery-based energy storage is already in operation within data centres across the world and provides a host of UPS benefits. A bespoke engineered storage solution will constantly monitor and measure the electricity supply to the load, and will intelligently recognise when support is required, such as during high-demand periods on the National Grid or when power quality dips. When support is triggered, the batteries respond automatically within a three-millisecond timeframe, providing electricity to the sensitive load for a period of up to two hours.
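A minimal sketch of the trigger logic is shown below: the supply voltage is compared against a tolerance band and the load is switched to battery when it strays outside it. The nominal voltage, tolerance and sample values are assumptions for illustration; in a real system this decision is made in power electronics within milliseconds, not in Python.

NOMINAL_V = 230.0
TOLERANCE = 0.10          # assumed: support the load if supply strays more than 10% from nominal

def needs_support(sample_v):
    return abs(sample_v - NOMINAL_V) / NOMINAL_V > TOLERANCE

for sample in (231.2, 229.8, 196.5):      # the last sample represents a brownout-style dip
    if needs_support(sample):
        print(f"{sample} V: switch load to battery")
    else:
        print(f"{sample} V: grid supply within tolerance")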
The nature of battery-based energy storage technology also provides a ‘future proof’ solution for high-technology critical locations, such as data centres. Frequent changes in the processing power and speed of IT equipment, along with a range of electrical equipment connected to wireless controls and the ‘Internet of Things’ (IoT) means more scalability is required. By having a bespoke engineered energy storage solution, provided by a company that has undertaken a full site survey prior to installation, it is a simple process to understand the capacity required by the current energy storage system and plan for any future batteries that may need to be installed and connected if and when demand increases.
Alongside the requirement for UPS, some of the most widely recognised and popular energy providers have been increasing tariffs for electricity use in both commercial and residential properties, even though Ofgem has recently reported that there is no clear reason why this should be the case.[1] What’s more, the National Grid is compounding energy price concerns faced by businesses that consume moderate to high levels of electricity, such as data centres, by increasing DUoS (Distribution Use of System) and Triad tariffs.
[1] https://www.ft.com/content/eadef124-de59-11e6-9d7c-be108f1c1dce
DUoS and Triads are established charges levied on businesses for consuming energy at periods of high demand throughout the day. The charges can account for approximately 15-19% of a typical non-domestic electricity bill and seem to be unavoidable, as they are charged by the Distribution Network Operator, which has a local monopoly on the supply of electricity.
To avoid DUoS and Triad tariffs, some companies will reduce or switch off all electrical equipment at peak tariff times, usually Monday to Friday between 16:00 and 19:00. However, a shutdown procedure clearly cannot be implemented in critical data centre facilities.
With DUoS tariffs published in advance, the charge can be completely avoided by using energy storage technology. Such solutions can store the less expensive electricity generated at night or during off-peak periods, usually from 00:00 – 07:30, 21:00 – 24:00 and across the weekend. The battery technology can then discharge the stored energy during a peak DUoS period, allowing companies to save up to 24% on electricity costs. Triads are more difficult to predict but, using current and historic data, can still be forecast very accurately, allowing energy storage solutions to take companies off the grid when a Triad period is highly likely.
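As a rough worked example of the saving, the sketch below compares buying red-band energy directly from the grid with discharging energy that was stored at an off-peak rate; the tariff rates, load and round-trip efficiency are assumptions for illustration, not Powerstar figures, and the percentage shown applies to the shifted energy rather than the whole bill.

# Illustrative tariff figures only - real DUoS red/amber/green rates vary by region and year
peak_rate = 0.28        # GBP per kWh during the 16:00-19:00 red band (assumed)
offpeak_rate = 0.10     # GBP per kWh overnight (assumed)
peak_load_kw = 200      # load shifted onto the battery during the red band (assumed)
red_band_hours = 3
efficiency = 0.90       # assumed round-trip efficiency of the storage system

energy_kwh = peak_load_kw * red_band_hours
cost_from_grid = energy_kwh * peak_rate
cost_from_battery = (energy_kwh / efficiency) * offpeak_rate
saving = cost_from_grid - cost_from_battery
print(f"Daily red-band saving: GBP {saving:.2f} ({saving / cost_from_grid:.0%} of the red-band cost)")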
A bespoke energy storage solution not only allows companies to use the stored energy to make savings, it also enables them to redirect electricity back to the National Grid in order to generate additional revenue through Demand Side Response (DSR) incentives.
Supporting grid capacity through DSR using energy storage is significantly cheaper than drawing electricity from the grid during periods of high demand and, unlike diesel generators and CHP units, energy storage can be connected to the National Grid in a way that allows instant discharge of electricity. As a result, the technology will ensure businesses successfully respond to the vast majority of DSR events.
However, as new incentives keep being added to the DSR scheme, it is likely that battery-based energy storage will be one of the few technologies able to qualify for future benefits, as it is a clean form of energy that can respond to changes in grid frequency within an 11-millisecond timeframe.
As the importance of UPS support for data centres continues to grow, it is critical for companies to use a reputable provider that can deliver a fully bespoke engineered solution, such as energy storage technology. Modern battery-based energy storage systems, in particular, provide a host of benefits for companies, including scalability, future security and a host of financial incentives.
Powerstar is a market leader in the industry, delivering a range of bespoke solutions that are designed and manufactured in the UK. For more information on voltage optimisation and energy storage visit the Powerstar website at www.powerstar.com
Crossrail selected ServiceNow to replace outsourced IT support and management system
Crossrail Limited is building a new railway for London and the South East, running from Reading and Heathrow in the west, through London to Shenfield and Abbey Wood in the east. It is delivering 42km of new tunnels, 10 new stations and upgrading 30 more, while integrating new and existing infrastructure.
The £14.8 billion Crossrail project is Europe’s largest infrastructure project. Construction began in 2009 at Canary Wharf, and is now more than 80% complete.
The new railway, which will be known as the Elizabeth Line when services begin in central London in 2018, will be fully integrated with London’s existing transport network and will be operated by Transport for London (TfL). New state-of-the-art trains will carry an estimated 200 million passengers per year. The new service will speed up journey times, increase central London’s rail capacity by 10% and bring an extra 1.5 million people to within 45 minutes of central London.
Crossrail plans to hand over the new railway and all assets, including IT, to Transport for London once works are complete. Crossrail worked with Fruition Partners to implement ServiceNow to provide IT support during construction right through to handover.
The nature of the Crossrail project means there is a high volume of joiners and leavers, as employees are on-boarded to deliver contractual requirements and off-boarded once works are complete. This translates directly into a higher than normal volume of IT requests for new starters, movers and leavers, among other requests.
Crossrail implemented a self-service solution that allows its users to easily make IT requests and ask for help, automating the delivery of requests for improved service and reduced operational cost.
Having decided to bring IT support in-house, Crossrail selected ServiceNow through the standard procurement process using the Government’s Digital Marketplace (G-Cloud initiative). ServiceNow demonstrated the best value for money and quickest return on investment, allowing for rapid adoption to generate savings before the transfer of assets to TfL in 2018.
Alistair Goodall, Head of Applications and Portfolio Management for Crossrail Ltd said, “ServiceNow is unique, with the grounded architecture we were looking for in terms of SaaS. It was our chosen solution for a quick route to market, with competitive prices making it the most cost-effective solution for Crossrail.”
Using the Government’s G-Cloud initiative, Fruition Partners was chosen to implement the solution as a Gold Sales and Services Partner of ServiceNow. “They helped keep us on the right track throughout, and their workshops and training ensured we got the knowledge transfer we needed into our own team,” said Alistair Goodall.
One of Crossrail’s key requirements was an ability to go live rapidly and then evolve where necessary because it was important to achieve payback well before handover of assets to the future operator.
By using out-of-the-box solutions wherever possible, such as Service Request and Incident, the Discovery phase of the project was completed within 70 days, with Phase 1 going live in less than eight months, delivering Service Requests and the CMDB. Phase 2, providing Incident Management and Change, went live shortly after that.
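To give a flavour of how requests can be raised against the platform programmatically, the sketch below logs an incident through ServiceNow’s standard REST Table API using the Python requests library; the instance URL, credentials and field values are placeholders, and the case study does not describe which fields or integrations Crossrail actually uses.

import requests

INSTANCE = "https://example.service-now.com"   # placeholder instance
AUTH = ("api.user", "password")                # placeholder credentials

payload = {
    "short_description": "Laptop build request for new starter",
    "urgency": "3",
    "category": "hardware",
}
resp = requests.post(
    f"{INSTANCE}/api/now/table/incident",
    auth=AUTH,
    headers={"Accept": "application/json", "Content-Type": "application/json"},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print("Created:", resp.json()["result"]["number"])   # e.g. INC0012345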
The roll-out to 3,000 staff at sites across London was facilitated by a programme to educate the site administrators, supported by Fruition Partners. Referring to workshops held as part of the project, Alistair Goodall says: “The construction site administrators were key to the success of the project and they were pleasantly surprised by how intuitive the system was to use and how it would make their lives easier.”
The user interface is a significant part of this: at Fruition’s suggestion, the design was approached from an end-user perspective, rather than following strict ITIL principles of ‘ticket classification’. Instead, the ethos is oriented to enable the user to ‘tell me your problem in the easiest way possible’.
At a headline level, Crossrail has achieved a payback on the project within a year, using a platform that has also improved end-user experience. Crossrail’s annual survey showed happier users and, according to Alistair Goodall, “a more positive attitude towards IT as a whole: it’s not seen as a cumbersome giant anymore.”
In particular, he points to the chat function in ServiceNow which has been very popular and has changed the nature of interactions with users to bring them closer to the support teams.
In addition to cost savings, metrics such as the increase in the number of self-service tickets and the decrease in phone calls to the service desk have also demonstrated the success of the system, as “it’s easier to use self-service than to pick up the phone”, says Goodall.
Suhran Miranbeg, Solution Architect, who runs the managed services operation, comments that implementing ServiceNow has improved the functioning of his team: as the rate of phone calls to the service desk has decreased, they can focus more closely on other business objectives. In addition, thanks to ServiceNow’s reporting capability, Crossrail is now able to identify trends, quickly spot issues, find solutions and continue improving the user experience.
As the Crossrail project nears completion and the handover of assets required for the operation of the railway begins, only a small fraction of the current workforce will remain, responsible for the close-out of contracts. The IT function will be handed over to TfL, and in the interim the ServiceNow application will be vital to that handover process. The CMDB functionality will be used to close warranties, identify data required for the future and manage the transfer for operational purposes, along with storing compliance documents and processes. As Alistair Goodall puts it: “We’ll be using ServiceNow to identify what we’ve got, what’s useful and what we should keep.”
Fruition Partners will continue to provide support and training during this process, supplementing the resource of Suhran Miranbeg’s team. Overall, Alistair Goodall’s view of the project is “This has been a painless implementation, without huge overhead, which has really delivered results, and which will play a vital role in the wind-down of Crossrail.”
Business today has become an intricate network of companies, people and processes – all focused on meeting customer demand. Business processes are no longer rigid chains – but dynamic networks and information flows, crucial in keeping companies intimately connected with staff, customers and suppliers.
By Darren Watkins, managing director for VIRTUS Data Centres.
Whilst it’s widely agreed that the most competitive business is one that is fully connected with its own systems, cloud solutions, mobile workforce, supply chain, partners and customers, many companies remain largely disconnected. They are restrained by organisational silos and boundaries, by truncated processes and by legacy operations. Disparate information systems are unconnected, and the better external links which interconnectivity promises remain unrealised.
At VIRTUS, we believe that the promise of interconnected enterprises can no longer be ignored. Making connections is vital to accelerating business performance and creating new opportunities. Organisations can either seize the opportunity, or risk being left behind by more innovative companies committed to improving the connections they make.
The change required to become better connected is significant. Innovation has historically happened at a discrete product or operational level; interconnectivity, by its nature, has to take place across an entire business – and beyond. Companies need to understand how they can change their whole ecosystems – and so the need for guidance and support from experts has never been greater.
Exploring what an interconnected future looks like is best achieved by learning from the successes and failures of others. Analysts, tech experts and commercial businesses need to open their doors to share experiences – and to move away from a closed environment where collaboration isn’t encouraged and innovation happens in silos. Only then will the interconnectivity model mature, and whole industries reap the rewards.
For the technology industry that serves these businesses, the message to work together is clear. There is a collective responsibility to foster greater collaboration in order to unlock the potential of technological change. When tech advances, whole industries do, and change can’t happen in a silo. While the digital disruption which leads to an interconnected business represents a ‘do-or-die’ tipping point, the opportunity is a positive one – and our collective help is the crucial element in driving a culture of progress and innovation.
So how do businesses start moving along their journey towards making better connections? The first step to success is in looking for the right vendor to help. Technology is the enabler of the interconnected company – and the right technology partner is the backbone to any successful connected business.
Whether you choose colocation, fully managed connectivity within a third party data centre, or a cloud-based service, the best tech providers help you make connections at a rate which you can’t do alone.
Companies must expect measurable and predictable returns on technology investment, and as such a close relationship with their tech provider is crucial. We’d advise any company to be very clear on objectives and timelines and, because the nature of interconnectivity is unpredictable, to take control where they can. SLAs, guarantees about uptime and reliability promises are crucial.
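When weighing up uptime guarantees, it helps to translate availability percentages into permitted downtime. The short sketch below does that arithmetic; the figures follow directly from the percentages and are not tied to any particular provider’s SLA.

MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.2%} availability allows ~{downtime:,.0f} minutes of downtime per year")

So a 99.99% commitment permits roughly 53 minutes of downtime a year, while 99% permits more than three and a half days; the difference is worth spelling out before signing.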
At VIRTUS, our customers have the choice to design and create interconnections based on specific needs, or they can take advantage of the fact that VIRTUS’ data centres are already interconnected within many service providers’ own metro-connected portfolios, providing near-instant, scalable, high bandwidth. These pre-connected solutions also come with the benefit of fast provisioning times and large-scale pricing advantages for fibre, wavelength and Ethernet services.
Of course, this isn’t an overnight revolution. While the shift to a connected ecosystem is a profound one, companies need to be aware that they don’t have to immediately make a revolutionary break from the past. But they must be prepared to make incremental and forward-looking changes necessary to capitalise on the interconnected future. They must learn lessons from others, and look for measurable ROI from any change they make.
Ultimately, though, these small steps can reap big rewards – as the boundaries within and among companies begin to evolve. Organisational and functional silos will give way to dynamic networks that span conventional boundaries. Streamlining processes will reduce costs, enhance quality and speed up operations. And it’s here that the productivity wars will be fought and won.
Life has become strange recently. On the world stage all the talk is about building boundaries, walls and restricting trade. Yet in business, we’re still discussing the benefits of breaking down barriers, integrating siloed information, open systems and global markets.
By Sean Harrison-Smith, Managing Director, Ceterna.
We’re even experiencing an end to the traditional 9am to 5pm office day as more and more of us opt to work remotely. In other words, conventional restrictions are being smashed, and the momentum is so strong that it’s hard to imagine it won’t continue to grow.
But then who knows? As we have learnt over the past few years, anything can happen and businesses need to be prepared for a potentially bumpy ride. But the difference now, compared with whenever this has happened previously, is that advances in analytics and artificial intelligence (AI) mean better-informed decisions and less risk-taking.
According to Salesforce, despite all the data we are currently creating, less than 1% of it is analysed, and half of all business decisions are made with incomplete information.
But perhaps the main step change is not that AI, for example, exists at all – after all, it’s been the backbone of science fiction for decades. The leap is that it’s now accessible to smaller businesses, providing tools that don’t just pull information out of data, but push information to you, anticipating what you are going to want to know.
So what has brought about this change? In the past there have been four key challenges to using AI in business:
Most business data sits in a maze of internal and external systems and a mix of cloud and on-premise systems which don’t communicate, leading to siloed data and questionable data quality. Cloud-based CRM solutions are designed to connect all of that data to create a single view of each individual customer. This connected approach to data is essential to optimise the AI opportunity.
Unanalysed, unused data is worthless, so data sitting in these silos is no good to anybody. But neither is data that nobody can analyse or make meaningful in any way. Data scientists are like gold dust and can ask for their weight in gold accordingly. Today’s new AI tools are making it possible for businesses to work without them, although I don’t think data scientists will be taking a pay cut any time soon. The best new platforms offer native data preparation, saving time and resources by eliminating the need for ETL.
Previously the kind of computing system needed to run machine-learning algorithms would have been prohibitive for small businesses to buy. However, cloud computing has made this computing power more accessible and affordable.
Until recently, AI was something that existed in books and in films – nothing to do with business. And yet why are the tech giants all developing their own form of AI – for example Salesforce and Einstein, IBM and Watson? Because they see the huge potential, of course.
But how does AI differ from just analysing data? Algorithms adapt to data, developing behaviours not programmed in advance, but learning to read and recognise context. Inherent in this is the ability to make predictions about future behaviour to know the customer more closely and to be proactive rather than reactive.
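As a minimal sketch of what ‘learning from data rather than explicit rules’ means in practice, the example below fits a simple classifier to a handful of historic deals and scores a new lead; the features, figures and the use of scikit-learn are illustrative assumptions, not a description of how Salesforce Einstein works.

from sklearn.linear_model import LogisticRegression

# Toy historic data: [deal_size_kgbp, days_since_last_contact], label = 1 if the deal closed
X = [[50, 3], [10, 40], [80, 5], [15, 30], [60, 10], [5, 60]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)   # the model infers the pattern; nobody hand-codes the rule

new_lead = [[45, 7]]
prob = model.predict_proba(new_lead)[0][1]
print(f"Estimated probability of closing: {prob:.0%}")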
And how does it work in practice? For example, a manufacturer may be thinking of increasing production – instead of just shouldering the risk, increasingly they will have metrics and predictive data to tell them whether or not this is a good idea.
On the sales front, reps can be ‘pushed’ proactive information on the day ahead to a smartphone or tablet. Key customer meetings are organised in priority order of opportunity value, along with each customer’s top three pain points plus practical information such as directions to the customer site (which is also pre-programmed into the rep’s sat nav). This information is dynamic; suddenly, for example, there is a notification: a top customer has just made an important acquisition. The rep is automatically sent the top trending news articles on the topic, with product recommendations relevant to the acquisition.
Meeting notes are uploaded and the system automatically extracts action items and suggests next best actions.
There’s little doubt that the impact of AI for business will be pervasive and cover many areas including sales, service, marketing, manufacturing and even IT where it can be used to build smarter, predictive apps faster.
Salesforce may be the first of the big names to consolidate all their acquisitions and bring the results to market, but the software is still evolving and will, no doubt, continue to do so. Salesforce partners have a huge role to play here: working with customers first to show them that AI will complement their skills and help them work and act smarter, and then helping them implement the technology in the best way for their organisation.
It’s not surprising that some organisations are wary: the business world has been battered by successive waves of new technologies over the past few years. But perhaps taking the risk now may lead to fewer risks in the years to come.
Seasoned industry professionals will recall the excruciating days of installing and connecting countless fibers, one at a time. As the number of data centres grew exponentially in the 2000s, designers and installers were tasked with managing hundreds and even thousands of single- and 2-fiber connector solutions. Accommodating the high volume of connectors within ever-tighter space constraints required more elaborate storage and routing solutions, which came with their own sets of challenges.
By David Kiel and David Kozischek, Corning Optical Communications (both pictured) and Mike Hughes, USConec.
Fast forward to 2016 and those days are fortunately long gone, largely thanks to the emergence of the multi-fiber push-on (MPO) connector. The MPO format has dramatically reduced the amount of time, effort, and space required to install and deploy network technologies, particularly in parallel optic applications.
With continual improvements, the MPO format deserves to be an essential part of any data centre build-out.
To understand how far the MPO connector has come, it’s worth reflecting on the technology’s introduction in the early 1990s. At the heart of the MPO connector lies mechanical transfer (MT) ferrule technology, first developed in the mid-1980s for use in consumer telephone services. This MT ferrule technology became the basis for the first MPO connector.
The timing couldn’t have been better. Networks were being tasked with transmitting more data, more quickly. As the need for bandwidth increased, the industry began moving toward networks and cabling with higher fiber densities – the multilane highway of data transmission. Because of the increase in “lanes” used with parallel optics, an efficient, high-density interconnect was needed. The MPO connector format succeeded in establishing a compact means to efficiently couple and decouple the high-density MT ferrule format via a bulkhead-mounted coupler.
While the MPO connector met many of the challenges of installing and deploying the latest network technologies, more fibers also meant more installation considerations. Before advancements to the MPO format, it typically took two installers a full day to terminate and test a standard 144-fiber optical cable. Before long, installers had the ability to rapidly connect eight to 12 fibers at a time with the snap of a tool, or by using a pre-terminated plug-and-play cable, trimming a day-long job to just a few hours.
Corning and US Conec have taken a leading role in developing an MPO connector that meets the installation challenges presented by ever-increasing quantities of fibers. In 1996, the MTP® connector brand was launched and has since undergone continual improvements to meet the evolving challenges of the industry. Key advancements have included refinements to ensure lower insertion loss, significant boosts to stability and a migration to polyphenylene sulfide (PPS) thermoplastic injection molding, which is much less susceptible to degradation caused by moisture absorption.
We’ve come a long way since that initial MT ferrule technology used in Japanese telecom networks. But the MTP® format is just getting started. Today, the challenge faced by the industry is the emergence of hyperscale, big data, and cloud data centres: How do we provision, add, and support high-density, bandwidth-greedy applications that require massive space to accommodate a massive number of cables? For Corning, these challenges have meant a focus on ever-improving insertion loss, fiber density, ease of installation and stability to ensure that the latest MPO connector technology is ready to meet those demands.
MPO evolution is not just focused on the mega-cloud, big data, and hyperscale computing. The latest technology has been designed to work not only with true fiber-to-fiber connections, but with a host of other technology and electronics across all vertical industries – financial, medical, educational, co-location, and more. So whether installers are working with duplex, 8-, or 16-fiber transmissions, the connector scales to whatever technology is being used – including new parallel applications such as 400Gb Ethernet capable of running across 32, 16, and eight fibers.
Beyond this, the current generation of the MTP® connector brings novel features and functionality that simplify field configurability. Not having the right male or female end to hand is no longer a problem. The latest MTP® connectors make it easy to change gender and polarity in the field, without requiring a specialized skill set or a connector engineer. Along with optimized field configurability, the connectors also feature ergonomic enhancements that improve the feel of plugging and unplugging.
With their 20-plus-year history of performance, ongoing improvements, and the next generation of advancements soon to come, MPO connectors still deliver exceptional value for a vast range of network technologies. Installers must stay in tune with the latest innovations to this essential technology and take full advantage of the time savings, space efficiencies, and simplicity that they can bring.
By 2018, two thirds of enterprises will experience Internet of Things (IoT) security breaches, according to a recent report by ForeScout. The rapid rise in IoT, as well as other practices such as remote working, is presenting more ways for hackers to attack business networks. New connected devices are extending the attack surface for many enterprise networks and IT managers now have an even larger influx of security challenges to deal with.
By Hubert Da Costa, VP EMEA at Cradlepoint.
The proliferation of IoT has opened the door to an onslaught of attacks on the devices and the web-based management platforms that run them. The security issue lies with the devices themselves. For years, companies have been producing consumer-grade devices with a focus on areas such as productivity, customer experience and revenue streams — but very little on security.
Many of the sleek, lightweight IoT devices made in the past few years are inexpensive and powerful enough to perform a series of specific functions, but are vulnerable to web application attacks or simple password brute force attacks. Often they run very simple operating systems, which lack even the most basic security tools, such as the ability to upgrade firmware if a security issue is discovered.
Like IoT, workforce mobility provides some of the biggest opportunities and challenges for enterprise networks. The bottom-line benefits of employees being able to work anywhere are clear: greater productivity during business travel, more consistent communication, workday flexibility, reduced infrastructure costs, and much more.
However, the challenges are just as evident. Employees need access to a variety of applications and documents that live either in the cloud or at the corporate data centre. Meanwhile, the IT department often must use inflexible legacy architecture and hardware to provide network and application access that is highly secure no matter where employees are working from or which devices they are using.
More enterprise traffic is being driven off private wide area networks (WANs) – like Multiprotocol Label Switching (MPLS) – and instead moving over the internet. While virtual private networks (VPNs) offered secure connections to traditional WANs, businesses are now looking to a new type of VPN infrastructure that is more dynamic, software-defined and orchestrated. Finding a flexible alternative to traditional VPNs is important for IT departments as workforce mobility and IoT adoption become more prevalent. Software-defined wide area networks (SD-WAN) can help manage threats, secure device registration and provide reports on aspects including network downtime, security threats and LTE data usage.
The primary use case for SD-WAN has been augmenting or replacing expensive and constrained MPLS branch networks with a hybrid WAN that includes broadband connections. However, many enterprises are shifting their WAN focus beyond the wired branch and towards the growing number of remote people and ‘things’ they need to connect over the public internet, including IoT deployments, connected vehicles, pop-up stores, kiosks, caregivers and their equipment, and even body-worn cameras.
SD-WAN can ensure users have access to important applications that live in the data centre or cloud via one tightly controlled network. It can give an organisation’s mobile workforce a secure, LAN-like connection to private and public cloud apps and files from anywhere and on any device. Enabling a software-defined overlay network can resolve the issues faced in an IoT deployment, allowing a more efficient traffic flow between the IoT devices, in-house data centre, and cloud data centre, while still maintaining security. VPN overlay networks can also be built programmatically, eliminating the configuration complexity of traditional VPNs.
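To illustrate the idea of building overlay tunnels programmatically rather than hand-configuring each VPN endpoint, the sketch below generates WireGuard-style peer configuration for a hub from a simple list of remote sites; WireGuard is used purely as a familiar, generic example, the keys and addresses are placeholders, and this is not a description of any SD-WAN vendor’s implementation.

# Illustrative only: render hub-and-spoke overlay config from a site inventory
sites = [
    {"name": "pop-up-store-01", "public_key": "<SITE1_PUBKEY>", "tunnel_ip": "10.99.0.11/32"},
    {"name": "kiosk-02",        "public_key": "<SITE2_PUBKEY>", "tunnel_ip": "10.99.0.12/32"},
]

def peer_stanza(site):
    return (
        f"# {site['name']}\n"
        "[Peer]\n"
        f"PublicKey = {site['public_key']}\n"
        f"AllowedIPs = {site['tunnel_ip']}\n"
        "PersistentKeepalive = 25\n"
    )

hub_config = "[Interface]\nPrivateKey = <HUB_PRIVATE_KEY>\nAddress = 10.99.0.1/24\nListenPort = 51820\n\n"
hub_config += "\n".join(peer_stanza(s) for s in sites)
print(hub_config)   # adding a new site is a one-line change to the inventory, not a manual VPN build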
While software-defined overlay networks can help, it is equally important to carefully consider network architecture for potential security holes. Businesses should be asking: where are the threats and what can we do to increase security?
Insider threats – whether malicious or non-malicious – come from bad practices within the organisation. For example, failure to educate users about careful network selection in public settings presents sizable risk. When an employee wants to work remotely from a coffee shop, airplane, or hotel room that offers free Internet, the potential for malicious activity is significant. A bad actor can pose as that location’s Internet access and serve as a gateway through which people access the web. With the ability to survey all the Internet traffic at a public location, they can control everything. It can be very difficult to detect this type of attack.
Yet the bigger issue for the company is when an employee whose device was unknowingly attacked at a coffee shop returns to the office and plugs in; now the entire company network is at risk. It is very difficult to know what network a team’s devices have been using. To help mitigate this risk, solutions that support cloud-based services such as content filtering and secure VPN to protect the corporate network should be used.
A secure network depends on everyone in the organisation doing his or her part. IT managers can routinely survey network architecture and monitor on-ramps, and cloud-based management platforms can help immensely. Mitigating security risks for distributed enterprises will require a comprehensive approach. Amid the ever-increasing importance of the IoT, it will take a combination of efforts to keep distributed enterprise networks as secure as possible.
How air filtration with honeycombs helps to prevent data loss.
By Dr. René Engelke, Freudenberg Filtration Technologies.
A world without data centers seems impossible today. Every area of our life and every industry depend on data-based IT infrastructures, which have to be stored in huge data centers. To keep both data and infrastructure protected and available, operators need to prevent corrosion damage by running their data centers under stable temperature and pressure conditions. What is often thought to be a problem confined to smog-polluted cities in Asia or humid regions in Latin America has also become a common challenge in European countries. However, there is a simple solution to providing pure air and avoiding corrosion – a new filtration solution in the shape of honeycombs.
The number of data centers has grown constantly in recent years – not only in Europe and Northern America, but also in many other regions around the globe. Thanks to digital transformation and intensified data use, the traffic and storage of data are set to increase even further. According to network supplier Cisco, total data center storage capacity will increase almost five-fold from 2015 to 2020[1]. Furthermore, an increasing number of today’s data centers are located in the heart or immediate proximity of cities such as Shanghai or Beijing, which have to cope with unexpected levels of corrosive noxious gases in the air.
Sulphur-bearing gases, such as sulphur dioxide (SO2) and hydrogen sulphide (H2S), are the most common gases causing corrosion of electronic equipment. Along with various nitrogen oxides, they are released during the combustion of fossil fuels, especially in areas with large volumes of road traffic or heating systems. This type of pollution directly affects data center operators as the infiltration of gaseous contaminants leads to electronic corrosion.
In the major conurbations of China and the U.S., experts are generally aware of the danger corrosive gases pose in data centers. However, air pollution – and consequently the risk of corrosion for electronic components – is just as high in many European urban agglomerations. In London alone, 75 data centers operate in close proximity to the city.
So how can corrosive gases affect sensitive data? Due to cost issues, data centers are not usually hermetically sealed and air is drawn in, cooled if necessary and recirculated. Facilities are usually not operated under cleanroom conditions and operators are regularly entering via normal doors, so corrosive gases have easy access to the electronic components. To make things worse, these electronic parts have become even more sensitive over the last decade: with the introduction of the European RoHS guidelines in 2006, the use of certain hazardous substances in electrical devices has been restricted, which in turn has led to changes in the compounds used for electrical parts. Unfortunately, the new substances react far more readily with noxious gases, and consequently corrosion levels have increased.
The reduction in the size of circuit board features and the miniaturization of components necessary to improve hardware performance also make the hardware more prone to attacks by corrosive particles and gases in the environment. In addition, temperature has a significant influence on the level of corrosion. Today, data centers are frequently operated at higher room air temperatures to save energy costs – while in the past operating temperatures were between 20 and 24°C, the average temperature in many data centers is now as high as 27°C. Although this may not sound alarming, we must keep in mind that a temperature increase of 10°C doubles the corrosion rate. Consequently, data centers are now more likely to be affected by corrosion, which can cause faults or equipment disturbances, reduce productivity or increase downtime and eventually lead to data loss.
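The ‘10°C doubles the corrosion rate’ rule of thumb is easy to put into numbers, as the short sketch below does; the 20°C reference temperature is an assumption chosen to match the operating range quoted above.

# Rule of thumb from the article: corrosion rate roughly doubles for every 10 degC rise
def corrosion_multiplier(temp_c, reference_c=20):
    return 2 ** ((temp_c - reference_c) / 10)

print(f"27 degC vs 20 degC: x{corrosion_multiplier(27):.2f}")   # roughly 1.6 times the reference rate
print(f"30 degC vs 20 degC: x{corrosion_multiplier(30):.2f}")   # 2 times the reference rate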
Running IT systems nonstop is crucial for most organizational operations. Hence, the main concern of data center hosts is business continuity. If a system becomes unavailable, company operations may be impaired or stopped completely. Therefore, it is essential to provide a reliable infrastructure for IT operations, which will minimize the chance of disruption. A first step to preventing hardware failures in data centers is to proactively measure the air quality. In order to find a suitable preventive solution, it is necessary to assess and monitor the temperature, humidity, dust and gaseous contamination.
A simple approach to monitoring air quality is to expose copper and silver foil discs to the air for a couple of weeks, and then analyse the thickness of the resulting corrosion layer. Based on the test results, it is possible to classify the environment into one of four corrosion severity levels: G1 – mild, G2 – moderate, G3 – harsh, GX – severe. Corrosion of the sensitive electronic parts already occurs in G1 environments. Their lifespan is considerably shortened in G2 environments, while in level G3 and GX it is highly probable that corrosion will lead to damages.
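A monitoring system can map a measured copper corrosion rate onto these severity levels with a simple lookup, as the sketch below shows; the 300/1,000/2,000 ångström-per-month breakpoints are assumptions taken from the ISA-71.04 classification rather than figures given in this article, which quotes only the 300 ångström guideline.

# Copper-coupon thresholds (angstroms per month) assumed from ISA-71.04
def severity_class(copper_rate_angstrom_per_month):
    if copper_rate_angstrom_per_month < 300:
        return "G1 - mild"
    if copper_rate_angstrom_per_month < 1000:
        return "G2 - moderate"
    if copper_rate_angstrom_per_month < 2000:
        return "G3 - harsh"
    return "GX - severe"

for rate in (120, 450, 1500, 2600):
    print(f"{rate} angstrom/month -> {severity_class(rate)}")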
Experts recommend that data center equipment should be protected from corrosion by keeping the relative humidity below 60 percent, and by limiting the particulate and gaseous contamination concentration to levels at which the copper corrosion rate is less than 300 ångström per month and silver corrosion rate is less than 200 ångström per month. Gas-phase filtration air-cleaning systems are a valuable corrective step, removing corrosive gases through the process of chemisorption or adsorption.
A common solution to eliminate acidic pollutant gases from intake and recirculated air is to use air-filtering pellets. However, using pellets in data centers has one major disadvantage: they produce dust that will cling to the surface of the electrical parts. An additional dust filter is needed – resulting in extra costs and higher energy consumption. Freudenberg Filtration Technologies supports customers around the globe with its Viledon system solutions for industrial filtration. As an alternative to pellets, Freudenberg offers a revolutionary, honeycomb-shaped filter technology based on activated carbon. This new technology can remove contaminant and odorous gases as well as volatile organic compounds from the air supply stream more efficiently, and is capable of neutralizing even ultra-low concentrations of gaseous contaminants.
The new Viledon Honeycomb modules are based on Freudenberg’s Versacomb technology, which works with parallel square channels through which the air passes. The channels are separated by walls made of activated carbon powder, which are less than a millimetre thick. They are held in shape by ceramic binders to prevent any dust production and to provide stability. The honeycomb structure reduces the maximum distance between the carbon and the bulk flow of the process air, allowing highly efficient interaction between carbon and air during operation at high flow velocities.
Existing systems can be easily upgraded or retrofitted with the new technology. If a data center is already equipped with an air-conditioning system, the Honeycomb modules can be installed before the air intake since they have a low air resistance. Pressure loss is only one-third that of pellet-based solutions, which means that the new solution is able to work with existing air-conditioning systems, while pellets would require a larger and more expensive air-conditioning solution due to pressure loss. Depending on the size of the data center, honeycomb modules can be flexibly combined according to the density and composition of the corrosive gases.
Once the modules are installed, no further maintenance is needed. Honeycomb filters will run for between four and five years before they have to be changed, depending on the contamination level. To monitor the corrosivity of the air in a room, an online system based on copper and silver sensors – as described above – can easily be set up. ChemWatch by Freudenberg collects data on the corrosion status and visualises the current G classification graphically. All data can be transferred to a computer, control station or smartphone, providing operators with a constant overview of the air quality. Thanks to the sensors’ corrosivity measurements, operators are able to foresee when the filter’s capacity is coming to an end and the filter needs to be replaced.
Freudenberg also provides a comprehensive filter management package, comprising not only innovative filter solutions but also service support and warranties. Viledon filterCair combines specialist expertise for diverse industry sectors with top-quality filter solutions for any requirement. Companies like Pacific Insurance and Alibaba in Asia already rely on Freudenberg’s Viledon Honeycomb filter solutions. Thanks to this highly efficient and reliable gas-phase filtration system, customers have been able to significantly improve the air quality in their data centers, helping ensure that their systems remain up and running without the fear of data loss due to corrosion. Based on this experience, several European data center hosts are also considering implementing the Honeycomb technology. Since the risk of pollution is expected to increase in European cities over the coming years, a reliable filtration solution is paramount to protecting data and supplying the purest air to data centers.
[1] “Cisco Global Cloud Index: Forecast and Methodology, 2015–2020”, source: http://www.cisco.com/c/dam/en/us/solutions/collateral/service-provider/global-cloud-index-gci/white-paper-c11-738085.pdf
ComputerWeekly’s Cliff Saran writes that ‘AWS Outage Shows Vulnerability of Cloud Disaster Recovery’ in his article of 6th March 2017. He cites the S3 outage suffered by Amazon Web Services (AWS) on 28th February 2017 as an example of the risks of running critical systems in the public cloud. “The consensus is that the public cloud is superior to on-premise datacentres, but AWS’s outage, caused by human error, shows that even the most sophisticated cloud IT infrastructure is not infallible”, he says.
By David Trossell, CEO and CTO of Bridgeworks.
Given that the AWS outage was caused by human error, the first question I’d ask is whether blaming the public cloud for the outage is fair. The second question I’d like to pose is: could this incident have been prevented by using a data acceleration solution that deploys machine intelligence to reduce the potential calamities caused by human error? In the case of the AWS S3 outage, a simple typographical error wreaked havoc to the extent that the company couldn’t – according to The Register – “get into its own dashboard to warn the world.”
With human error at fault, it doesn’t seem fair to blame the public cloud. What organisations like your own need – whether you are an AWS customer or not – are several business continuity, service continuity and disaster recovery options, supported by the ability to back up and restore your data to any cloud in real time. This means that your data and your resources should neither be concentrated in just one data centre, nor reliant on one cloud storage provider. Then, when disaster strikes, your data is ready and your operations can switch to another data centre or disaster recovery site without damaging the ability of your business to operate.
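As a rough sketch of what ‘not concentrating on one cloud’ can look like at the storage level, the example below copies the same backup object to two S3-compatible endpoints using boto3; the endpoints, bucket names and file are placeholders, and a real deployment would layer on encryption, integrity checks and, ideally, WAN acceleration for the transfer.

import boto3

# Placeholders throughout: endpoints, buckets and credentials come from your own configuration
targets = [
    {"endpoint": "https://s3.eu-west-2.amazonaws.com", "bucket": "dr-backups-primary"},
    {"endpoint": "https://objects.other-cloud.example", "bucket": "dr-backups-secondary"},
]

backup_file = "nightly-backup-2017-06-01.tar.gz"

for target in targets:
    s3 = boto3.client("s3", endpoint_url=target["endpoint"])   # credentials resolved from the environment
    s3.upload_file(backup_file, target["bucket"], backup_file)
    print(f"Copied {backup_file} to {target['bucket']} via {target['endpoint']}")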
Some experts believe that British Airways (BA) could have avoided its recent computer failure, which is expected to have cost £150m, if it had had the right disaster recovery strategies in place. The worldwide outage on 27th May 2017 left its passengers stranded at airports, and it has no doubt damaged the airline’s brand image, with newspaper reports predicting the demise of the company.
A blogger for The Economist wrote on 29th May 2017: “The whole experience was alarming. The BA staff clearly were as poorly informed as the passengers; no one in management had taken control. No one was prioritising those passengers who had waited longest. No one was checking that planes were on their way before changing flight times. BA has a dominant position, in terms of take-off slots, at Heathrow, Europe's busiest hub. On the basis of this weekend's performance, it does not deserve it.”
A couple of days later, Tanya Powley and Nathalie Thomas asked in their article for The Financial Times: “BA’s computer meltdown: how did it happen?” They point out that “Leading computer and electricity experts have expressed scepticism over British Airways blaming a power surge for the systems meltdown that led to travel chaos for 75,000 passengers worldwide over the weekend.” Although BA has denied that human error was at fault, most experts don’t agree with the company’s stance.
Powley and Thomas explain: “Willie Walsh, chief executive of IAG, BA’s parent company, attributed the problem to a back-up system known as an “uninterruptible power supply” — essentially a big battery connected to the mains power — that is supposed to ensure that IT systems and data centres can continue to function even if there is a power outage.” Experts said that UPS systems rarely fail, and even when they do, they shouldn’t affect the ongoing mains supply to a data centre. BA claimed that the incident had also caused damage to its IT infrastructure.
It now transpires that the power outage was due to a contract maintenance worker inadvertently turning off the power. Some commentators suggest that this might not be true – after all, BA will want to avoid the huge cost of any potential litigation, making it convenient to pass the buck. As well as blaming human error from a technical perspective, news agency Reuters points out that the company had engaged in cost-cutting exercises to enable it to compete with low-cost airline rivals Ryanair and easyJet. This has led many commentators to suggest that BA took too many short cuts to achieve this aim – resulting in an inability to keep going in the face of a systems failure.
The question still unanswered is: why didn’t the second synchronised data centre that BA has a kilometre away kick in, as it should have done? The whole point of running two data centres that close together is to cope with exactly this situation. That’s the elephant in the room, and it’s a question no one seems to be asking.
Speaking about the AWS incident, David Trossell, CEO and CTO of Bridgeworks comments: “Artificial intelligence (AI) is no match for human stupidity. Why do people think that just because it is “in the cloud”, they can devolve all responsibility to protect their data and their business continuity to someone else? Cloud is a facility to run your applications on – it is still up to you to ensure that your data and applications are safe and that you have back-up plans in place.” Without them you’ll have to suffer the consequences.
So the weakness doesn’t necessarily lie in the public cloud. “Someone made a mistake – someone can make the same error on premise: The difference is that one storage method has a wider impact, but for the individual company the effect is the same”, he says before asking, “Where was their DR plan?” He also points out that: “Companies invest in dual data centres to maintain business continuity, so why do they think that only having one cloud provider gives the same level of protection?” It quite clearly doesn’t. With data driving most businesses today, uptime must be maintained and prioritised.
So the fault, whenever an outage occurs, often lies with us humans – from poorly configured networks to poorly developed software. In wide area network (WAN) terms, it can lie in how the network and the interconnecting elements are managed. So to reduce human error there is a need to deploy machine learning. After all, machines don’t and can’t make typographical errors. Instead, they can support us and enable us to focus on more strategic business and IT activities by automating – for example – the configuration of a network to reduce the impact of data latency and to reduce packet loss. In other words, machine learning and AI can make us more efficient.
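A rough way to see why latency and packet loss matter so much here is the well-known Mathis approximation, which bounds the steady-state throughput of a single TCP stream at roughly MSS / (RTT × √loss). The sketch below uses hypothetical link figures; it illustrates the underlying constraint rather than describing how any particular data acceleration product works.

# Back-of-the-envelope illustration of why latency and packet loss matter
# so much for WAN transfers: the Mathis et al. approximation bounds
# steady-state TCP throughput at roughly MSS / (RTT * sqrt(loss)).
# All figures below are hypothetical.

from math import sqrt

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Approximate single-stream TCP throughput in Mbit/s (Mathis formula)."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = mss_bytes / (rtt_s * sqrt(loss_rate))
    return bytes_per_s * 8 / 1e6

if __name__ == "__main__":
    for rtt_ms, loss in [(2, 1e-6), (20, 1e-4), (80, 1e-3)]:
        print(f"RTT {rtt_ms:>3} ms, loss {loss:.0e}: "
              f"~{tcp_throughput_mbps(1460, rtt_ms, loss):8.1f} Mbit/s")

Doubling the round-trip time halves the achievable rate, and quadrupling the loss rate does the same, which is why automatically tuning the network around these two factors pays off for real-time replication between distant sites.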
As for public clouds, they have become more popular in spite of previous concerns about security. Trevor James’ article headline in TechTarget, for example, says ‘Healthcare’s public cloud adoption highlights [the] market’s maturity’. This is a market that often lags behind in the adoption of newer IT, and its perception of cloud computing has allegedly been no different. Yet James says that the use of the public cloud in this sector has accelerated. One of the providers benefitting is AWS.
Over the last few years, cloud providers have addressed their customers’ security concerns by adding a number of tools that permit the encryption of data at rest and in transit. In the US, for example, once these issues were addressed it became possible to talk about uptime, disaster recovery and security, and to ask whether there is a need to develop in-house expertise to run first-class data centres. If the latter isn’t feasible, then it’s a good choice to outsource to a data centre that already has the skills and resources to help you maintain both your organisation’s data security and its uptime.
However, organisations still need to take a step back. Trossell warns: “Public cloud has seen a massive expansion lately, but companies are throwing out the rules, processes and procedures that have stood the test of time and saved many organisations from ruin.” He says backing up is about recovery point objectives (RPOs) and recovery time objectives (RTOs). RPOs refer to the amount of data that’s at risk: they consider the time between data protection events, and therefore the amount of data that could be lost during a disaster recovery process. RTOs refer to how quickly data can be restored during disaster recovery, to ensure your business remains operational.
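As a minimal illustration of that arithmetic, the hypothetical sketch below compares a backup schedule and a measured restore time against stated RPO and RTO targets; the names and figures are invented for the example and do not describe any Bridgeworks or AWS tool.

# Minimal sketch of the RPO/RTO check described above. All names and
# figures are hypothetical examples.

from dataclasses import dataclass

@dataclass
class ContinuityPlan:
    backup_interval_hours: float   # worst-case gap between data protection events
    measured_restore_hours: float  # time taken to restore data and resume service

def meets_objectives(plan: ContinuityPlan, rpo_hours: float, rto_hours: float) -> dict:
    """Compare a plan against recovery point/time objectives."""
    return {
        # Worst case, everything written since the last backup is lost,
        # so the backup interval must not exceed the RPO.
        "rpo_met": plan.backup_interval_hours <= rpo_hours,
        "rto_met": plan.measured_restore_hours <= rto_hours,
    }

if __name__ == "__main__":
    nightly_to_single_cloud = ContinuityPlan(backup_interval_hours=24, measured_restore_hours=6)
    print(meets_objectives(nightly_to_single_cloud, rpo_hours=1, rto_hours=4))
    # -> {'rpo_met': False, 'rto_met': False}: nightly copies to one provider
    #    fall well short of a 1-hour RPO and a 4-hour RTO.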
“It is no good having the data in the cloud if you can’t recover it quickly enough to meet your RTO and RPO requirements”, says Trossell before adding: “Too many organisations are turning a blind eye to this, and one copy in one place is not a level of protection that most auditors should agree with.” Using this situation as an example, he asks: “What would you do if AWS lost your data?”
Without having the ability to restore your data from several sources, your organisation would suffer downtime. This can in some cases lead to financial and reputational damage, which should be avoided at all costs. He therefore advises you to work with cloud providers that have service level agreements in place that guarantee that your data will always be recoverable whenever you need it.
Furthermore, with regard to whether the public cloud is superior to on-premise data centres, he says: “Clouds provide an invaluable service, but it is not the right answer for every circumstance. They are not efficient and cost effective for large long term use.” The concerns over cloud security haven’t gone away either, particularly because there is a shortage of people with cyber-security skills. Shadow IT is another issue holding back cloud adoption – even though some types of cloud are more secure than others. Organisations also equate the cloud with a loss of control over their IT. These factors therefore need to be considered when you decide what should or shouldn’t go into a public cloud, a private cloud or a hybrid cloud.
According to Sharon Gaudin’s article for Network Asia of 28th April 2019, ‘IT leaders say it’s hard to keep the cloud safe’, cloud adoption is slowing down rather than accelerating because of security concerns. In spite of this, a recent survey by Intel of 2,000 IT professionals working in different countries reveals that 62% of companies are storing sensitive customer data in the public cloud. You could therefore argue that the public cloud, and cloud adoption overall, creates a number of contradictions. On the one hand, recent trends have seen an uptake of the public cloud; on the other hand, the same issues still arise. These can have the effect of slowing down cloud adoption – and not just adoption of the public cloud.
Yet in the case of the AWS S3 outage, the blame needs to lie with human error and not with the public cloud. Trossell therefore concludes that you should consider data acceleration solutions such as PORTrockIT – the DCS Awards’ ‘Data Centre ICT Networking Product of the Year’ – to remove the human risk associated with, for example, the manual configuration of WANs. They can also help your organisation to maintain uptime by enabling real-time back-up at speed, by mitigating the effects of latency and by reducing packet loss. They can permit you to send encrypted data across a WAN too, and your data centres needn’t be located next to each other within the same circles of disruption, because they can be many miles apart.
So, with them in mind, the public cloud certainly isn’t ruled out for disaster recovery. It can still play an invaluable disaster recovery role. With data acceleration supported by machine learning, you will be able to securely back up and restore your data, with improved RPOs and RTOs, to any cloud. You also won’t have to suffer downtime caused by a simple typographical error. The network will be configured for you by machine learning to mitigate data latency, network latency and packet loss.
The countdown to the European Union’s General Data Protection Regulation (GDPR) has begun and the clock is ticking fast. While the media is abuzz with commentaries, guides, and solutions for the GDPR's guidelines, conclusive interpretations of its various aspects have yet to be reached. The basic intent of the GDPR, however, is crystal clear: data protection—more specifically, making personal data secure.
By V Balasubramanian, Product Manager, ManageEngine.
The term personal data assumes extremely broad coverage in the GDPR—any data that relates to "an identifiable natural person" is classified as personal data. Organizations usually digitally process and store things like customer names, email addresses, photographs, work information, conversations, media files, and a lot of other information that could identify individuals.
Personal data is all-pervasive, and is found in nearly every piece of IT. If your organization wants to comply with the GDPR, then you need to define and enforce strict access controls as well as meticulously track access to data.
Privileged access and threats to data security
Cyber attacks can originate both from within the perimeters of an enterprise, and from outside. Analyses of the recent high profile cyber attacks reveal that hackers—both external and internal—are exploiting privileged access to perpetrate attacks. Most attacks compromise personal data that is processed or stored by IT applications and devices. Security researchers point out that almost all types of cyber attacks nowadays involve privileged accounts.
Privileged accounts—the prime target of cybercriminals
In internal and external attacks alike, unauthorized access to and misuse of privileged accounts—the "keys to the IT kingdom"—have emerged as the main techniques used by criminals. Administrative passwords, system default accounts, as well as hard-coded credentials in scripts and applications, have all become prime targets that cyber criminals exploit to gain access.
Hackers typically launch a simple phishing or spear-phishing attack as a way of gaining a foothold in a user's machine. They then install malicious software and look for the all-powerful administrative passwords—which give unlimited access privileges—to move laterally across the network, infect all computers, and siphon off data. The moment the hacker gains access to an administrative password, the entire organization becomes vulnerable to attacks and data theft. Perimeter security devices cannot fully guard enterprises against these types of privilege attacks.
Third parties and malicious insiders
Organizations are required to work with third parties such as vendors, business partners, and contractors for a variety of purposes. Quite often, third-party partners are provided with remote privileged access to physical and virtual resources within the organization.
Even if your organization has robust security controls in place, you never know how third parties are handling your data. Hackers could easily exploit vulnerabilities in your supply chain or launch phishing attacks against those who have access and gain entry to your network. It is imperative that privileged access granted to third parties is controlled, managed, and monitored.
Additionally, malicious insiders—including disgruntled IT staff, greedy techies, sacked employees, and IT staff working with third parties—could plant logic bombs or steal data. Uncontrolled administrative access is a potential security threat, jeopardizing your business.
Begin your GDPR journey with privileged access management
Control, monitor, and manage your organization's privileged access
The GDPR requires that organizations ensure and demonstrate compliance with its personal data protection requirements. Protecting personal data, in turn, requires complete control over privileged access—the foundation of GDPR compliance. Controlling privileged access requires you to take the steps below (a brief illustrative sketch follows the list):
● Consolidate all your privileged accounts and put them in a secure, centralized vault.
● Assign strong, unique passwords and enforce periodic password rotation.
● Restrict access to accounts based on job roles and responsibilities.
● Enforce additional controls for releasing the passwords of sensitive assets.
● Audit all access to privileged accounts.
● Completely eliminate hard-coded credentials in scripts and applications.
● Wherever possible, grant remote access to IT systems without revealing the credentials in plaintext.
● Enforce strict access controls for third parties and closely monitor their activities.
● Establish dual controls to closely monitor privileged access sessions to highly sensitive IT assets.
● Record privileged sessions for forensic audits.
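To make a couple of those items concrete, here is a deliberately simplified, hypothetical sketch of role-based credential release from a central vault with an audit trail. It is not a description of any vendor’s product, ManageEngine’s included.

# Toy sketch of two checklist items: role-based release of vaulted
# credentials and an audit trail of every access attempt. Hypothetical
# code, not any vendor's implementation.

import secrets
from datetime import datetime, timezone

class PrivilegedAccountVault:
    """Toy vault: strong generated passwords, role-based checkout, full audit."""

    def __init__(self):
        self._secrets = {}    # account name -> current password
        self._roles = {}      # account name -> roles allowed to check it out
        self.audit_log = []   # record of every checkout attempt

    def add_account(self, account, allowed_roles):
        # Assign a strong, unique password at enrolment (and again on rotation).
        self._secrets[account] = secrets.token_urlsafe(24)
        self._roles[account] = set(allowed_roles)

    def rotate(self, account):
        self._secrets[account] = secrets.token_urlsafe(24)

    def checkout(self, account, user, role):
        granted = role in self._roles.get(account, set())
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "account": account, "user": user, "role": role, "granted": granted,
        })
        return self._secrets[account] if granted else None

if __name__ == "__main__":
    vault = PrivilegedAccountVault()
    vault.add_account("prod-db-root", allowed_roles={"dba"})
    print(vault.checkout("prod-db-root", user="alice", role="dba") is not None)  # True
    print(vault.checkout("prod-db-root", user="bob", role="contractor"))         # None
    print(len(vault.audit_log), "access attempts audited")

A real deployment would add session recording, dual-control approval for sensitive assets and automatic rotation after each checkout, as the checklist above suggests.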
As explained above, controlling, monitoring, and managing privileged access calls for automating the entire life cycle of privileged access. Manual approaches to privileged access management are time-consuming, error-prone, and may not be able to provide the desired level of security controls.
The market abounds with automated privileged access management solutions, which can empower you to achieve total control over privileged access in your organization, thereby laying a solid foundation for GDPR compliance.
Though fully complying with the GDPR requires a variety of solutions, processes, people, and technologies, automating privileged access management serves as the foundation for GDPR compliance. Together with other appropriate solutions, processes, and people, privileged access management helps reinforce IT security and prevent data breaches.
DCS talks to Universal Electric Corporation’s Director of Marketing, Mark Swift, about the company’s work in the field of power distribution and how this is helping data centre owners and operators address some of their current pain points around energy efficiency, data centre design and hot topics such as The Cloud and Big Data.
1. Please can you provide some background on Universal Electric Corporation (UEC) – its formation, how the company has developed and the key personnel?
Universal Electric Corporation (UEC), the manufacturer of the U-S Safety Trolley and Starline suites of products, is a global leader in power distribution equipment. Founded in Pittsburgh, PA, USA in 1922 as an electrical contracting firm, the company began manufacturing in the late 1950s, developing mobile electrification products under the U-S Safety Trolley name. The company’s focus on innovation continues to pave the way for safer, more flexible and more reliable power distribution systems.
The U-S Safety Trolley products have been providing safe, mobile power to cranes, assembly lines, amusement park rides and a whole host of other applications for years. Developed initially as a safer and more reliable method of powering cranes in steel mills and other manufacturing environments, the solid copper conductor bar designs used across our various systems can deliver joint-free power over lengths far greater than other products can achieve.
For more than 25 years, Starline Track Busway has provided mission critical and data centre facilities with the most flexible, reliable and customisable power distribution systems on the market today. Target markets were more industrially focused in the beginning, but expanded into laboratory, data centre and mission critical applications in the early 1990s. Other, newer Starline products include Plug-in Raceway, 380Vdc Solutions and the Critical Power Monitor (CPM).
2. What does UEC, manufacturer of Starline, offer the data centre market?
The main product UEC offers the data centre space is Starline Track Busway. The benefits of the Track Busway product include reduced facility construction costs, faster installation, added flexibility for the future, and the ability to customise solutions to fit your needs. Plug-in units can be disconnected and connected without de-energising the busway, and the product requires no routine maintenance and is faster and less costly to expand or remodel.
3. And how does UEC distinguish itself in what is quite a crowded marketplace?
UEC distinguishes itself through its outstanding reliability and its willingness to customise its products.
4. And how do you see this marketplace developing over time, in terms of the number of companies involved?
More and more competitors are entering the space – there have been several new entrants within the past five years – since this market is continuing to grow.
5. Can you talk us through the Starline portfolio – the track busway and the critical power monitor being the main two products?
Yes, the Starline Suite of Products actually consists of four offerings. Track Busway is the largest and most seasoned, having been in the market for almost 30 years. We also offer Plug-in Raceway, which is a wall or perimeter mount product, with a similar feature set as the Track Busway. With many new high-voltage dc applications being implemented around the world, we have also designed our 380Vdc Solutions products for these types of applications. The newest product is the Starline Critical Power Monitor (CPM). This revenue-grade metering platform offers an enhanced monitoring package that will allow you to monitor, integrate and display more data centre power information easily and reliably.
6. And are there any plans to introduce any product upgrades and/or any new product lines over the next couple of years?
To further enhance the Starline metering offering, UEC has introduced corded and retrofit Critical Power Monitor (CPM) units. This allows non-metered legacy plug-in units to be upgraded, whether in the field or at a UEC manufacturing facility, to incorporate metering functionality into their power distribution systems. Furthermore, both corded and retrofit CPM units are capable of being installed with other manufacturers’ power distribution systems, not just Starline systems.
Also, UEC has taken the Starline Track Busway system to new heights (or amperages) with the addition of 1200 amp busway. This more robust system is an ideal solution for industries with higher density load requirements, yet is still compatible with Starline Plug-in Units that are used with other, lower amperage Starline Track Busway systems.
In order to keep up with the growing global demand for Starline products, UEC opened a manufacturing facility in Slough, England to better serve its European customer base. The company is also currently exploring options for how to better attend to the needs of the expanding markets in Southeast Asia.
7. Moving on to some industry issues – how does UEC help its customers address the drive towards power efficiency in the data centre?
The Starline Critical Power Monitor (CPM) is definitely contributing to the present drive towards intelligence in the data centre. Power consumption is a definite concern in the data centre market, as data centre activity represents a non-trivial portion of the power consumed in the world. This makes the data centre in every organisation the epicentre for power management issues, from costs and consumption to energy efficiency. The Starline CPM drills down to provide critical energy usage statistics so that data centre managers can analyse the information and then act upon it. Having this data available to benchmark existing usage and to determine a plan to reduce and validate power consumption will be key moving forward.
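One common benchmark built on exactly this kind of metering is Power Usage Effectiveness (PUE): total facility energy divided by the energy delivered to the IT equipment. The figures in the sketch below are hypothetical and simply show the calculation.

# Illustration of the sort of benchmarking that branch-circuit metering
# makes possible. PUE = total facility energy / IT equipment energy.
# The meter readings below are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

if __name__ == "__main__":
    monthly_it_load_kwh = 295_000      # e.g. summed from per-circuit meter readings
    monthly_facility_kwh = 472_000     # utility meter: IT plus cooling and losses
    print(f"PUE = {pue(monthly_facility_kwh, monthly_it_load_kwh):.2f}")  # ~1.60

Tracking this ratio over time, alongside per-circuit figures, is what lets operators validate that efficiency measures are actually reducing consumption.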
8. And does UEC have any play around the modular/prefab data centre trend?
Yes, our products are installed in the designs of several of the large global modular and prefab manufacturers. The features and design of our Track Busway products are very well suited to these applications. Since you can add plug-in units, or even sections of busway, as you grow and expand, there is a significant benefit to the customer when our products are included in the design.
9. Do issues such as Cloud, Big Data and the drive towards digital have any impact on the make-up of the Starline product line – either now or in the future?
Yes, because of many of these factors, we are seeing higher and higher densities in data centre applications. Our Track Busway line has versions available in ampacities of 225 to 800A. So as the demand increases, our breadth of products is set to meet the increasing power needs of our clients for these types of installations.
10. In terms of bringing product to market – do you have a mature Channel presence, or are you still looking to develop certain markets – both geographic and industry sectors?
In the US, we have a fairly mature channel presence. We are continuing to expand our footprint around the globe, with direct sales and support staff in many of the emerging markets. Data centres are only one of the areas in which we supply products today. We have a significant installed base in retail, industrial and other industry sectors.
11. After sales service and support are key differentiators – what does UEC offer around this?
We have application and sales support for our products globally. Our goal is to make sure your installation goes smoothly and our products meet your needs.
12. Any other comments?
We really try to employ a consultative approach with customers. We are not trying simply to sell you products, but to sell you solutions that will address issues you might be experiencing in your facility. UEC is dedicated to a strong focus on customer experience and innovation. We are committed to sustaining our leading position within the industries we service and to enduring partnerships, both within our organisation and with the external companies we supply.
Nearly 900 physicians and more than 8,000 nurses work at University of Florida (UF) Health Shands Hospital in Gainesville, Florida. The institution is among the best in the country in seven specialties - Urology, Cardiology, Neurology, Pulmonology, Nephrology, Gastroenterology and Oncology.
UF Health Shands’ new data centre, located in the new Shands Cancer Hospital, supports the entire hospital’s operation. It backs up and stores everything from patient and staff files to security footage and accounting files. As a premier teaching hospital at the University of Florida, an efficient, reliable operation is critical for delivering the most comprehensive, high-quality care, which is why UF Health Shands Hospital returned to Chatsworth Products (CPI) for its state-of-the-art solutions and customisation expertise.
On September 24, 2013, UF Health Shands opened its brand new data centre.
Using all Glacier White cabinets, runway and containment, the energy efficient space was ready for deployment. In addition to the CPI products used in the installation, Starline Track Busway with white plug-in units were also installed overhead, providing the necessary power to the cabinets. “Using white instead of the traditional black made it a class act and did not increase design costs,” Brad Kowal, Associate Director of Computer Operations for UF Health Shands Hospital, stated.
UF Health Shands Hospital was very familiar with CPI’s products, customisation capabilities and technical support. Located in the main part of the hospital, the legacy data centre includes black CPI F-Series TeraFrame Gen 2 Cabinets with custom Vertical Exhaust Ducts. Two of the cabinets are Glacier White to easily distinguish them from the others and indicate that they host emergency equipment, such as DMZ servers, public safety and security information. But even with this reliable architecture in place, the hospital still needed a new space with more power and cooling capabilities for its growing campus.
Initially, the room that turned into UF Health Shands Hospital’s new data centre was being used for storage. In order to support the robust IT infrastructure that the hospital needed, the 204 square metre space required a custom solution that would fit into the already existing room configuration.
UF Health Shands wanted an effective method to contain the heat from the IT equipment in the cabinets, using ducted exhaust cabinets and hot aisle containment. Additionally, the IT team had to consider air handlers and electrical gear, while maintaining enough space and flexibility in the data centre.
After scoping out the specs of the room and figuring out the requirements, Joe Keena, Data Centre Operations Manager for UF Health Shands Hospital, started to look for efficient solutions for the new space. CPI was a clear contender.
Steven Bornfield, Sr. Data Centre Consultant for CPI, explored and evaluated all of CPI’s products and solutions, showed examples of other CPI custom projects and proposed design options to the UF Health Shands IT team. In addition to the CPI products, Steven also recommended the installation of Starline Track Busway for the power distribution within the facility.
By June 2013, UF Health Shands Hospital had begun planning the new data centre, complete with CPI products.
CPI created a custom cabinet and aisle containment solution to fit in the new data centre space. The design featured CPI’s 45U F-Series TeraFrame Gen 2 Cabinets with Vertical Exhaust Ducts, N-Series TeraFrame Network Cabinets, a custom, self-supported Hot Aisle Containment (HAC) Solution, Snap-in Filler Panels and OnTrac Wire Mesh Cable Tray.
“We’re the first data centre in the state of Florida to have a free-standing Hot Aisle Containment solution,” Kowal exclaimed.
The HAC was customised to different heights, widths and depths to become the perfect solution for UF Health Shands. Equipment that had to remain in its own housing was rolled up to the HAC and fitted with panels that were cut to the correct size. “This helps maintain cooling, while accommodating vendor-supplied storage solutions,” stated Keena.
CPI’s HAC solution eliminates hot spots, improves CRAC unit efficiency and provides flexibility for supply air delivery through the ceiling, wall or floor.
The data centre has 33 cabinets that support highly virtualised application loading. Computing power is expected to average 12.5kW per cabinet, with some cabinets supporting up to 25kW of electrical capacity. Because there is no raised floor, the air handlers supply air directly to the space, reducing air handling unit power consumption and construction costs. Cabinet- and aisle-level containment strategies are used to provide closed-loop cooling to support high electric power densities.
“Once engineering was complete, we focused on aesthetics,” Kowal stated. “Being a hospital, we wanted the data centre to have a clean room feel. Having all white accomplished that extra level of aesthetics that demonstrates we take the cleanliness of our data centre seriously,” he added.
Glacier White is not only an aesthetic feature, but the colour also provides benefits, such as better visibility in the data centre, which can reduce lighting costs and contribute to the energy efficiency UF Health Shands was hoping to achieve.
“We needed to have three-phase power delivered directly to the cabinets, but in an efficient and cost effective manner,” said Keena. Since the facility is built on a slab-on-grade, there was no raised floor to run cables underneath, and with some of the CPI containment products being implemented as well, running conventional cabling overhead would have been difficult.
The design included parallel runs of 400 amp Starline Track Busway – one white and one black, to distinguish between the A and B feeds. The busway was installed at the rear of the cabinets, mounted directly to the drop ceiling, minimising the space required for the product. Customised plug-in units were also specified, with various configurations to meet the specific power requirements of individual cabinets.
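For a rough sense of the capacity involved, a three-phase run delivers P = √3 × V(line-to-line) × I × power factor. The 400 amp rating comes from the case study; the 415 V distribution voltage, unity power factor and 80% continuous-loading allowance in the sketch below are assumptions made only to illustrate how such a run relates to the 12.5kW average cabinet load quoted earlier.

# Rough capacity arithmetic for an overhead busway run. The 400 A rating
# comes from the text; the 415 V line-to-line voltage, unity power factor
# and 80% continuous-loading allowance are illustrative assumptions.

from math import sqrt

def three_phase_kw(volts_line_to_line, amps, power_factor=1.0):
    """P = sqrt(3) * V_LL * I * PF, expressed in kW."""
    return sqrt(3) * volts_line_to_line * amps * power_factor / 1000.0

if __name__ == "__main__":
    full_rating_kw = three_phase_kw(415, 400)   # roughly 287 kW per busway run
    usable_kw = 0.8 * full_rating_kw            # roughly 230 kW at 80% loading
    cabinets = usable_kw / 12.5                 # at the 12.5 kW average cited above
    print(f"{full_rating_kw:.0f} kW rated, {usable_kw:.0f} kW usable, "
          f"about {cabinets:.0f} cabinets at 12.5 kW each")

In a dual-feed A/B arrangement each run would typically be sized to carry the full load if its partner fails, so the practical planning numbers are tighter still.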
By going with Starline overhead, UF Health Shands was able to implement busway both on the rows with long runs and on the shorter runs with containment. Keena stated, “The flexibility added to our facility by incorporating Starline into our design will provide benefits to the data centre for years.”
“Then the fun started,” Keena said. UF Health Shands worked closely with Bornfield on the specs of arranging the cabinets in the data centre.
The N-Series TeraFrame cabinets host the networking switches and provide maximum flexibility and separation of hot and cold air within the cabinet. There are two N-Series cabinets on the end of each row, with a total of four in the data centre.
The sturdy and highly functional F-Series cabinets host servers and storage and support containment solutions, making it a smart choice for UF Health Shands’ data centre, which supports several types of equipment in one setting. CPI’s Vertical Exhaust Ducts were the perfect choice in this solution to isolate and guide hot exhaust air from the back of the cabinet to the drop ceiling plenum, creating a closed hot air return path to the cooling system.
“The plan was to utilise all Vertical Exhaust Duct cabinets but due to the constant changing environment of various devices, it was decided to use both Vertical Exhaust Duct cabinets matched with the HAC solution to allow for the cabinets and equipment that might not be able to be installed into Vertical Exhaust Duct cabinets,” Keena stated.
Custom cable openings were installed on the cabinets to allow proper power application. Both the A & B runs of Starline Track Busway were installed overhead, feeding the rear of the cabinets below. The white plug-in units have pin and sleeve receptacles on the face of the units.
CPI’s Sales and Technical Support teams assisted in the design of the space and worked closely with the engineers to help them with the products, custom solutions and installation for UF Health Shands Hospital.
“We wouldn’t have this data centre if it wasn’t for Steven. He came prepared, showed us options, and was able to walk the walk and talk the talk to the engineers. Our relationship, the responsiveness of the organisation and quality of the product is why we went with CPI,” stated Kowal.
UF Health Shands Hospital’s new data centre is running effectively with efficient cooling, zero hot spots and increased per-cabinet densities.
“CPI provided a unified solution where all of the cabinets matched and ensured that we were consistent to support current and future growth,” Keena said. “CPI provided true customer service and focus throughout the entire process,” he added.